A thinktank focused on global resilience has urged the UK Government to fill a gap in the regulation of AI with stronger incident reporting.
The Centre for Long Term Resilience (CLTR) has published a report on the issue with three recommendations to reduce the relevant risks.
It says there is a danger that the Department for Science, Innovation and Technology (DSIT) will lack visibility of a range of incidents, including failures in public service AI systems, bias and discrimination in foundation models, the misuse of systems and harm from AI companions, tutors and therapists.
“DSIT lacks a central, up-to-date picture of these types of incidents as they emerge,” it says. “Though some regulators will collect some incident reports, we find that this is not likely to capture the novel harms posed by frontier AI.”
Low hanging fruit
It has recommended that the Government take steps beginning with the creation of a system to report incidents arising from its own use of AI, which it describes as “low hanging fruit”.
This could involve steps such as expanding the Algorithmic Transparency Recording Standard to include a framework for reporting public sector incidents.
Secondly, the Government could commission regulators and consult experts to identify the most concerning gaps. This is essential to ensure effective coverage of priority incidents and to understand the incentives required to establish a functional regime.
Thirdly, there is a need to build capacity within DSIT to monitor, investigate and respond to incidents, possibly through the creation of a pilot AI incident database.
Such steps could help to coordinate responses to major incidents where speed is critical, and make it possible to sound early warnings about larger-scale harms that could arise in future, the report says.