UK Government launches AI Safety Institute

03/11/23

Mark Say Managing Editor

Image source: istock.com/Sutthiphong Chandaeng

The UK Government has launched the AI Safety Institute – claimed to be the world’s first of its kind – to evaluate the risks of frontier AI models.

It announced the launch following this week’s international AI Safety Summit, along with an agreement among governments to co-operate in AI safety testing.

The institute is evolving from the Frontier AI Taskforce and will test new types of frontier AI – general purpose models that can perform a wide variety of tasks – before they are released to address potentially harmful capabilities. This will involve exploring risks such as social harms, bias and misinformation, up to the dangers of humans losing control of the technology.

It will be chaired by Ian Hogarth, who led the taskforce, and supported by an external advisory board of experts in fields such as national security and cyber security.

It will also look to work closely with the Alan Turing Institute – the national body for data science and AI.

The Prime Minister’s Office said the move has the backing of leading AI companies and other nations involved in the summit.

Global hub

Prime Minister Rishi Sunak said: “Our AI Safety Institute will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast moving technology.

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people. This is the right approach for the long term interests of the UK.”

Secretary of State for Science, Innovation and Technology Michelle Donelan said: “The AI Safety Institute will be an international standard bearer. With the backing of leading AI nations, it will help policy makers across the globe in gripping the risks posed by the most advanced AI capabilities, so that we can maximise the enormous benefits.”

Participants in the summit also agreed on a shared ambition to invest in public sector capacity for testing and other safety research, and to share the outcomes of evaluations with other countries where relevant. This will involve testing models before and after deployment, with governments working alongside AI companies.

They also agreed to work towards developing, in due course, shared standards in this area.

Research on risks

An additional initiative will involve a scientific assessment of existing research on the risks and capabilities of frontier AI, setting out the priority areas for further research to inform future work on AI safety.

Professor Yoshua Bengio, a Turing Award winning AI academic and member of the UN’s Scientific Advisory Board, will lead the research that is planned to result in the publication of the first ever frontier AI State of the Science report.

Its findings will support future AI Safety Summits, plans for which have already been set in motion. The Republic of Korea has agreed to co-host a mini virtual summit in the next six months, and France will host the next in-person event in a year from now.  

Sunak commented: “Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree.

“Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released.”

Bletchley Declaration

The announcements came the day after participants in the AI Safety Summit agreed on the Bletchley Declaration on addressing the risks of frontier AI.

It covers two main areas of focus, the first being to work on identifying risks of shared concern, building a shared understanding of them and sustaining that understanding as capabilities increase.

The second is to build risk based policies across countries, collaborating as appropriate while recognising that approaches may differ based on national circumstances and legal frameworks. This will include the development of evaluation metrics, tools for safety testing and the building of a relevant public sector capability.
