
Turing Institute highlights unintended risks to national security in generative AI

02/01/24

Mark Say, Managing Editor




Generative AI could cause significant harm to the UK’s national security and public services in unintended ways, according to a report from the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute.

Published last month and titled The Rapid Rise of Generative AI: Assessing risks to safety and security, it says the threats go beyond those involving groups or individuals who set out to inflict harm using the technology.

There are also unintentional risks posed by improper use of and experimentation with generative AI tools, and by excessive risk-taking resulting from over-trust in AI outputs and a fear of missing out on the latest technological advances.

The research team consulted more than 50 experts across government, academia, civil society and leading private sector companies, most of whom said that unintended harms are not receiving adequate attention compared with the adversarial threats that national security agencies are accustomed to facing.

CETaS identifies potentially vulnerable areas including the use of AI in public services and in critical national infrastructure and its supply chains. It highlights explicitly malicious purposes to which the technology could be applied, including new methods of cyber attack, the promotion of terrorism, political disinformation and the connection of pretrained models to physical weapons systems.

Bias and inaccuracy

But it also outlines another tranche of risks, such as embedding biased social values and political leanings into any public services in which the technology is used, which could undermine public trust. This risk would intensify with the use of large language models (LLMs), where it is difficult to verify how they have been trained.

There is also a risk of undermining critical national infrastructure through feeding inaccurate information into the supply chain when people are designing documents, sending emails and making decisions with the support of generative AI.

In addition, any deployment in national security might reduce the reliance on a small number of people who have been trained to anticipate adversarial threats.

The report makes a series of recommendations, including the development of a multi-layered, socio-technical approach to system evaluation and the creation of a centralised register of generative AI model and system cards. The latter would enable decision makers across departments to make informed judgements about their risk appetite.

If generative AI is to be deployed operationally in national security, user interfaces should be designed to include explicit warnings about the accuracy and reliability of outputs, and consideration should be given to how the use of LLMs in the sector might affect warrantry and legal compliance.

CETaS also calls on the National Cyber Security Centre and Cabinet Office to develop guidance for the safe use of generative AI across government, and on the Home Office to commission research to build a more rigorous evidence base on terrorist uses of the technology.

There is also a need for research on voice cloning in the context of national security, and a case for developing evaluation metrics.

Ardi Janjeva, research associate at CETaS, said: “Generative AI could offer opportunities for the national security community, but it is currently too unreliable and susceptible to errors to be trusted in the highest stakes contexts.  Policy makers must change the way they think and operate to make sure that they are prepared for the full range of unintended harms that could arise from improper use of generative AI, as well as malicious uses.”
