
Ada Lovelace Institute sets three tests for AI regulation

18/07/23

Mark Say, Managing Editor



The Ada Lovelace Institute has identified three tests to determine the success of any future regulations on the use of AI in the UK.

It has published a report on the issue that analyses the Government’s proposals and sets out recommendations within the context of the three tests.

This comes in response to the Government’s white paper on AI regulation, the Data Protection and Digital Information (DPDI) Bill, the Foundation Model Taskforce and the AI Safety Summit.

The report highlights perceived weaknesses and says the proposals will not provide sufficient safeguards to ensure that AI systems are trustworthy and that their risks are mitigated.

In response, it calls for a rethink of elements of the bill and a review of rights and protections under existing legislation, along with the publication of a statement of the rights and protections people can expect when interacting with AI-based products and services.

Coverage issue

The first of the three tests concerns the coverage of regulation. The institute says the Government’s proposals devolve the regulation of AI to existing regulators with support from central functions, but that this will not cover all contexts in which AI is used, citing policing and central government among the examples.

It says that legal analysis by data rights law firm AWO has found that in many contexts, the main protections offered by cross-cutting legislation such as the UK General Data Protection Regulation and the Equality Act may often fail to protect people from harm or give them a viable route to redress.

To improve coverage, the institute recommends considering an AI ombudsman to directly support people affected by AI, reviewing existing protections, legislating to introduce better protections where necessary, and rethinking the DPDI Bill due to its implications for AI regulation.

The second test concerns capability, reflecting the resource-intensive nature of regulating AI. The report says regulators must be given the right resources and powers, and there should be a new statutory duty for them to have regard to AI principles. This should be accompanied by funding for civil society involvement in regulation.

The third test concerns urgency: significant harms are already associated with the use of AI, and there is a need to act more quickly than the Government’s proposed timeline of at least a year before implementation.

Robust governance

To address this, the institute is calling for robust governance of foundation models, underpinned by legislation, and for a review of how existing legislation can be applied to these models.

It also recommends mandatory reporting requirements for foundation model developers, pilot projects to develop better expertise and monitoring in government, and a more diverse range of voices at the AI Safety Summit.

Michael Birtwistle, associate director at the Ada Lovelace Institute, said: “The Government rightfully recognises that the UK has a unique opportunity to be a world leader in AI regulation and the prime minister should be commended for his global leadership on this issue.

“However, the UK’s credibility on AI regulation rests on the Government’s ability to deliver a world-leading regulatory regime at home. Efforts towards international coordination are very welcome, but they are not sufficient. The Government must strengthen its domestic proposals for regulation if it wants to be taken seriously on AI and achieve its global ambitions.”

The report is based on workshops with a range of experts, the legal analysis by AWO and extensive desk research.
