Ada Lovelace Institute highlights issues for AI in public services

14/03/25

Mark Say, Managing Editor

Image: digital streams emerging from a human head (source: istock.com/Monstij)

The Ada Lovelace Institute has set out a number of key issues to be addressed in increasing the use of AI in public services.

It has published a briefing paper, Learn Fast and Build Things, based on its study of the debate around and use of AI in the public sector over the past six years.

The institute – whose mission is to ensure that data and AI work for people and society – says the ‘lessons for success’ are not comprehensive but have consistently recurred in its case studies, and that it believes they can support governments aiming to accelerate the use of AI in services.

The lessons are grouped into four sections: contextualise AI; learn what works; deliver on public expectations and public sector values; and think beyond the technology.

The first includes the need to adopt clear terminology, reflecting a current confusion over definitions of what AI actually is, which is inhibiting learning and effective use.

It points to i.AI’s recent publication of a taxonomy for AI in government, and work to classify AI according to broad public sector use cases, but says there is a lot more to do in this area.

This comes with the common assertion that AI is only as good as the data underpinning it, and that context is important because the technology is not deployed in a vacuum but in complex social systems.

For learning what works, the paper says that governments should prioritise the establishment of a structured approach to assess the effectiveness of AI in public sector contexts, and to ensure the technology delivers genuine value while maintaining accountability.

This will have to address current shortcomings: there is no comprehensive view of where AI is being deployed in government and public services, and not enough evidence of its effectiveness.

Meeting expectations

To deliver on public expectations and public sector values there is a need to develop systems that are ethically as well as technically sound, and properly governed and deployed.

Currently, this is hindered by gaps in AI governance and the fact, in the institute’s view, that public procurement of the technology is not fit for purpose.

“Pursuing a high standard of transparency around procurement of AI can contribute to public trust and fairness,” the paper says. “If both procurers and companies know they will be scrutinised on decisions, there is more incentive to build in fairness and anticipate or flag damaging outcomes.”

It concludes with the need to think beyond the technology, saying that AI should not be seen as an opportunity to automate the public sector but to reimagine it, and that it will have wider societal consequences.

“AI should be viewed as a potential catalyst for fundamental service redesign, placing the citizen at the centre of public service delivery rather than focusing solely on immediate efficiency gains or automating the status quo,” it says.

“Through meaningful engagement with the public and relevant professions, governments can develop a shared understanding between citizens, staff and wider society of where AI has the potential to help reimagine more relational, effective and legitimate public services.”
