
AI Safety Institute becomes AI Security Institute

14/02/25

Mark Say, Managing Editor


Image: AI icon (source: istock.com/Black Kira)

The Department for Science, Innovation and Technology (DSIT) has announced the renaming of the AI Safety Institute to the AI Security Institute and the creation of a criminal misuse team to work within it.

The announcement comes after a degree of controversy over signs of the UK Government reducing its focus on AI safety, and its decision, along with the US, not to sign a new international agreement on AI, as reported by the BBC.

DSIT said the change of name for the institute – which was set up in November 2023 to evaluate the risks in frontier AI – reflects its focus on serious AI risks with security implications. These include how the technology can be used to develop chemical and biological weapons, and how it can be used to carry out cyber attacks and enable crimes such as fraud and child sexual abuse.

The institute will work with other partners across government, including the Defence Science and Technology Laboratory, to assess the risks posed by frontier AI.  

The criminal misuse team will work jointly with the Home Office to conduct research on a range of crime and security issues which threaten to harm British citizens. 

One area of focus will be the use of AI to make child sexual abuse images, with the new team exploring methods to help to prevent abusers from harnessing the technology to carry out their crimes. This will support work announced earlier this month to make it illegal to own AI tools which have been optimised to make images of child sexual abuse.  

Protecting citizens

Technology Secretary Peter Kyle said: “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values and way of life.

“The main job of any government is ensuring its citizens are safe and protected, and I'm confident the expertise our institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”

Chair of the AI Security Institute Ian Hogarth said: “The institute’s focus from the start has been on security and we’ve built a team of scientists focused on evaluating serious risks to the public.

“Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

Disappointment

The announcement prompted an expression of disappointment from the Ada Lovelace Institute, which works on ensuring data and AI are used for the good of society.

Its associate director Michael Birtwhistle said: "Following its decision to not sign the Paris declaration, the UK Government appears to be pivoting away from ‘safety’ towards ‘national security’, signalling that its flagship AI institution might be focusing exclusively on a far narrower set of concerns and risks.

"Addressing national security risks is an essential duty of any government. But the need to ensure that AI developed or deployed in the UK is safe and trustworthy has not gone away, nor has the need to understand the broader impacts of AI on inequalities, on jobs and on the environment."

He added: "While fast action on criminal misuse of AI is laudable, we are deeply concerned that any attention to bias in AI applications has been explicitly cut out of the new AISI's scope. A more pared-back approach from the Government risks leaving a whole range of harms to people and society unaddressed – risks that it has previously committed to tackling through the work of the AI Safety Institute. It is unclear if there is still a plan to meaningfully address them, if not in AISI.

"The International AI Safety Report is clear that there are well-established harms of AI related to bias, discrimination and privacy. One of its central findings is that AI systems can amplify social and political biases, causing concrete harm and discriminatory outcomes. The Government appears to be signalling it no longer sees bias and discrimination as a priority concern.

"We know the public don’t want to see AI adoption without any assurance of safety or risk mitigation. This significant narrowing of focus is out of step with what the public wants and expects."
