The Department for Science, Innovation and Technology (DSIT) has announced £8.5 million in funding for research on AI safety.
It will be made available under the Systemic AI Safety Fast Grants Programme, which will be led within the AI Safety Institute by its research director Christopher Summerfield and specialist researcher Shahar Avin, and carried out in partnership with UK Research and Innovation.
The organisations will soon invite applications for grant proposals from the public and private sectors that directly address systemic AI safety issues.
Technology Secretary Michelle Donelan announced the plan at the AI Seoul Summit, saying it will focus on how to protect society from AI risks such as deepfakes and cyber attacks. The most promising proposals will be developed into longer-term projects and could receive further funding.
Applicants will have to be based in the UK but will be encouraged to collaborate with other researchers from around the world.
Ambitious yet urgent
Donelan said: “When the UK launched the world’s first AI Safety Institute last year, we committed to achieving an ambitious yet urgent mission to reap the positive benefits of AI by advancing the cause of AI safety.
“With evaluation systems for AI models now in place, phase two of my plan to safely harness the opportunities of AI needs to be about making AI safe across the whole of society.
“This is exactly what we are making possible with this funding, which will allow our institute to partner with academia and industry to ensure we continue to be proactive in developing new approaches that can help us ensure AI continues to be a transformative force for good.
“I am acutely aware that we can only achieve this momentous challenge by tapping into a broad and diverse pool of talent and disciplines, and forging ahead with new approaches that push the limit of existing knowledge and methodologies.”
International network of institutes
She has also announced that the UK has signed up to an agreement with 10 other countries and the EU to set up an international network of AI safety institutes, following the launch of the UK body last year.
The international network will promote collaboration between the institutes. This will include sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science around AI safety.
Earlier this week it was announced that the UK AI Safety Institute is to set up an office in San Francisco to build relationships with technology experts in the Bay Area.