
Alan Turing Institute warns of security risk to AI research

10/03/25

Mark Say, Managing Editor



Urgent action is needed to secure the UK’s AI research ecosystem against hostile threats such as espionage, theft and duplicitous collaboration, the Alan Turing Institute has warned.

Its Centre for Emerging Technology and Security (CETaS) has published a report, Securing the UK’s Research Ecosystem, responding to growing fears that the UK’s AI research is a high priority target for state threat actors seeking technological advantage.

The concerns are heightened by the use of sensitive datasets, the dual-use nature of the technology (which can be repurposed for tasks that were not originally intended) and the possibility of reverse engineering (for example, tools designed to counter misuse of AI systems being converted to help attackers evade detection).

The report argues that awareness of security risks is inconsistent across the academic sector, and that researchers lack incentives to follow existing government guidance on research security.

This creates an urgent need for culture change, including balancing security against the pressure on academics to publish their research.

The report also highlights difficulties for academics both in assessing the risks of their research – for example, future misuse – and in carrying out time-consuming due diligence on international research partners, all without a clear view of current threats.

Top priority

Megan Hughes, research associate at the Alan Turing Institute and lead author of the report, said: “Furthering AI research is rightly a top priority for the UK, but the accompanying security risks cannot be ignored as the world around us grows ever more volatile.

“Academia and the Government must commit to and support this long overdue culture change to strike the right balance between academic freedom and protecting this vital asset.”

The report argues that an urgent, coordinated response between the UK Government and the higher education sector is needed, and it provides 13 recommendations to help build the resilience of the research ecosystem.

These include a need for regular guidance from the Department for Science, Innovation and Technology, with support from the National Protective Security Authority (NPSA), on the international institutions deemed high risk for funding agreements and collaborations.

This should come with more dedicated funding to grow the Research Collaboration Advice Team to support academic due diligence.

Threat landscape

In addition, the NPSA and the National Cyber Security Centre should engage more widely with UK-based publishing houses, academic journals and other research bodies to brief senior decision makers on the threat landscape and offer tailored support for developing relevant policies.

The NPSA is also urged to declassify and publish case studies of threats that have been intercepted or disrupted.

The report also urges UK Research and Innovation (UKRI) to provide grant funding opportunities for activities in research security.

Among the recommendations for academia is that all academic institutions should be required to deliver NPSA-accredited research security training to new staff and postgraduate research students as a prerequisite for grant funding.

The sector should develop a centralised due diligence repository to inform research partnerships and collaboration, hosted by a trusted partner such as Universities UK or UKRI.

The report’s authors also want to see pre-publication risk assessment for AI research standardised across major AI journals and academic publishing houses, aligned with existing research ethics review processes.
 
