Comment | Don't write off algorithms - responsible AI can produce real benefits
Keele's Dr Allison Gardner writes for The Conversation UK
Algorithms have taken a lot of flak recently, particularly those being used by the government and other public bodies in the UK. The controversial algorithm used to award student grades caused a huge public outcry, but national and local governments and several police forces have been withdrawing other algorithms and artificial intelligence tools from use throughout the year in response to legal challenges and design failures.
This has quite rightly brought it home to public sector organisations that a more critical approach to AI and algorithmic decision-making is needed. But there are many cases in which government bodies can deploy such technology in lower risk, high-impact scenarios that can improve lives, particularly if they don’t directly use personal data.
So before we leap full pelt into AI cynicism, we should consider the benefits it offers as well as the risks, and demand a more responsible approach to AI development and deployment.
One example of this is the Intelligent Street Lighting project being trialled by Glasgow City Council. It uses an algorithm to process real-time sensor data on noise, air pollution and footfall around the city and control street lighting in response to people’s use of cycle paths and open spaces.
The aim is to immediately improve safety but also allow for better city planning and environmental protection. Importantly, this project is being properly trialled and is open to public scrutiny, which will help address people’s concerns and needs.
Similarly, Liverpool City Council is working with the company Red Ninja on the Life First Emergency Traffic Control project, which aims to cut ambulance journey times by up to 40%. A new algorithm works within the existing traffic signal system to prioritise emergency vehicles, aiming to reduce congestion ahead of emergency vehicles and save critical minutes on ambulance response times.
Governments can also use AI for many low-risk jobs that do not aim to predict human behaviour or make decisions directly affecting individuals. For example, National Grid uses AI and drones to inspect 7,200 miles of overhead power lines in England and Wales.
The system is able to assess the steelwork for wear, corrosion and faults to conductors. This speeds up inspection, saving time and money, and allows human engineers to focus on repairs and improvements, producing a more reliable energy supply.
The Driver and Vehicle Standards Agency (DVSA) has used AI to improve MOT testing, analysing the vast amount of testing data to develop risk scores for garages and identify potentially underperforming centres. This has reduced enforcement visits by 50%.
The counterpart Driver and Vehicle Licensing Agency (DVLA) used a natural language processing algorithm to develop a chatbot to deal with customer enquiries. This is integrated into a single customer service platform so that staff can monitor all customer interactions by phone, email, webchat and social media.
These examples show the potential for government to use AI successfully and responsibly. So how can public sector bodies ensure their algorithms achieve this?
To begin with, there are numerous sets of guidelines they can follow, such as the OECD Principles on AI. These principles state that AI should be designed in a way that respects human rights, democratic values and diversity and include appropriate safeguards and monitoring of risks. There is a requirement for transparency and responsible disclosure so people understand the systems and can challenge them.
But guidelines aren’t necessarily enough. The UK government has published its own guidelines for trustworthy use of AI, and has invested significantly in numerous expert AI advisory bodies. Yet it has still managed to get many things wrong in its development of algorithms, as recent events have shown.
One reason for this is that there is still little acceptance that AI technology is not good enough to be used safely in high-impact, high-risk cases, such as awarding grades and visas. Sometimes AI should not be a solution.
Laws and nudges
New laws regulating the use of AI could help, but few countries have passed specific legislation so far. There are some good examples in development, such as the proposed US AI Accountability Bill. However, legislation moves slowly, is subject to significant lobbying and is outstripped by the speed of tech innovation. So quicker nudges towards responsible behaviour are needed.
The recent abandonment of certain government algorithms has shown that when the public is aware of poorly developed AI, it can change government behaviour and create demand for more trustworthy use of technology. So one possible solution, called for by the researcher network Women Leading in AI, of which I am a founder, is an AI Infomark.
Any apps, websites or documents relating to government services, systems or decisions that use AI would display the mark to alert people to that fact and point them to information about how the AI works and its potential impact and risks. This is a citizen-first strategy designed to empower people to understand and challenge an algorithm or AI system that has affected them. And it should hopefully push government to make sure it gets things right in the first place.
If government can combine adequate regulation with this kind of empowering, bottom-up approach to ensuring more responsible technology, we can start to reap the real benefits of greater use of algorithms and AI.
Allison Gardner, Lecturer in Computer Science/Co-founder Women Leading in AI, Keele University
This article is republished from The Conversation under a Creative Commons license. Read the original article.