The AI Safety Institute is to make £8.5 million available for research projects on the safety of AI systems.
It will also support research to tackle the threat of systems failing unexpectedly.
The institute has launched the funding scheme under its Systemic Safety Grants Programme, in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI).
The programme will initially back around 20 projects with up to £200,000 each, aiming to deepen understanding of the challenges AI is likely to pose to society in the near future.
Additional funding will be made available as further phases are launched.
Systemic AI safety focuses on the systems and infrastructure in which AI is being deployed across different sectors. The institute emphasised its importance in critical sectors such as healthcare and energy services.
Broader understanding
Institute chair Ian Hogarth said: “This grants programme allows us to advance broader understanding on the emerging topic of systemic AI safety. It will focus on identifying and mitigating risks associated with AI deployment in specific sectors which could impact society, whether that’s in areas like deepfakes or the potential for AI systems to fail unexpectedly.
“By bringing together researchers from a wide range of disciplines and backgrounds into this process of contributing to a broader base of AI research, we’re building up empirical evidence of where AI models could pose risks so we can develop a rounded approach to AI safety for the global public good.”
Secretary of State for Science, Innovation and Technology Peter Kyle said: “My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services. Central to that plan though is boosting public trust in the innovations which are already delivering real change.
“That’s where this grants programme comes in. By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery.”
Applicants have until 26 November to submit their proposals, which will be assessed on the issues the research could address and the risks it tackles. Successful applicants will be confirmed by the end of January 2025, with the first round of grants awarded in February.