
AI models: approach with caution, says Ada Lovelace Institute

03/10/23

Gary Flood Correspondent

Image: abstract technology background with 'AI' highlighted in centre (source: istock.com/amgun)

The Ada Lovelace Institute has published an examination of how AI (artificial intelligence) models can best be used by public sector organisations, finding that the sector is only at the start of understanding the opportunities and risks the technology could bring.

In its analysis, researchers warn that while there is huge potential in their use, it is still uncertain whether these models will be “accurate enough, reliable enough, and a good enough value-for-money to provide worthwhile solutions to existing problems”.

The models in question are AI models capable of a range of tasks and applications, such as text, image or audio generation. Private sector examples already in wide use include OpenAI’s GPT-3 and GPT-4 (which underpin ChatGPT) as well as image generators like MidJourney.

The body — an independent research institute and deliberative body with a mission to ensure data and AI work for people and society — describes these as ‘foundation models’.

The group points out that central government, local authorities and other public sector organisations are already considering how they can use such foundation models to assist with a broad spectrum of tasks, such as decision making and the sharing of information and research. Key drivers for wider deployment of systems based on such models, says the Institute, are ongoing budgetary constraints and growing user needs.

With aims ranging from enabling wider access to data, delivering services and monitoring service provision to less ambitious tasks like generating emails, there is a great deal of discussion and hype around the opportunities foundation models could bring, says the organisation. But the new policy briefing states: “We are very much at the start of the foundation-model journey and at the start of our understanding of what opportunities and risks it could bring.”

Risks associated with foundation models, it warns, include biases, privacy breaches, misinformation, security threats, overreliance, workforce harms and unequal access. Public sector organisations therefore need to consider these risks when developing their own foundation models, and should require information about them when procuring and implementing external ones. As AI technologies advance rapidly, government must consider carefully how to use foundation models in the public sector responsibly and beneficially.

Effective use of foundation models by public sector organisations will require them to carefully consider alternatives and counterfactuals - that is, comparing proposed use cases with more mature and tested alternatives that might be more effective or provide better value for money. Evaluating these alternatives should be guided by the Nolan Principles of Public Life, which include accountability and openness, it also suggests.

Improved governance of foundation models in the public sector will be necessary to ensure the delivery of public value and “prevent unexpected harms”. That could include, says the briefing:

  • regularly reviewing and updating guidance to keep pace with the technology and strengthen public bodies' ability to oversee new AI capabilities
  • setting procurement requirements to ensure that foundation models developed by private companies for the public sector uphold public standards
  • requiring that data used for foundation model applications is held locally
  • mandating independent third-party audits for all foundation models used in the public sector, whether developed in-house or externally procured
  • monitoring foundation model applications on an ongoing basis
  • continuing to implement the Algorithmic Transparency Recording Standard across the public sector
  • incorporating meaningful public engagement in the governance of foundation models, particularly in public-facing applications
  • piloting new use cases before wider rollout to identify risks and challenges
  • providing training for employees working with (either developing, overseeing or using) foundation models.

Foundation models may offer an opportunity to address certain challenges in public service delivery, but the sector must take coordinated action to develop and deploy them responsibly, safely and ethically, the guidance concludes.
