NCSC chief calls for security in AI development

15/06/23

Mark Say Managing Editor

Lindy Cameron
Image source: GOV.UK, Open Government Licence v3.0

The head of the National Cyber Security Centre (NCSC) has emphasised the importance of building security into AI technologies from their outset.

Chief executive officer Lindy Cameron highlighted the issue in a speech at the Chatham House Cyber 2023 conference this week, saying it is crucial to avoid designing systems that are vulnerable to attack.

The issue has significant implications for the development of AI in public services and defence, among other sectors.

“We cannot rely on our ability to retrofit security into the technology in the years to come nor expect individual users to solely carry the burden of risk,” Cameron said. “We have to build in security as a core requirement as we develop the technology.

“Like our US counterparts and all of the Five Eyes security alliance, we advocate a ‘secure by design’ approach where vendors take more responsibility for embedding cyber security into their technologies, and their supply chains, from the outset. This will help society and organisations realise the benefits of AI advances but also help to build trust that AI is safe and secure to use.

“We know, from experience, that security can often be a secondary consideration when the pace of development is high.

“AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems.”

Understanding threats

Cameron highlighted three relevant themes in the NCSC’s work, one being to help organisations understand the associated threats and how to mitigate them. She pointed to how machine learning creates a new category of cyber threat in the form of adversarial attacks, in which the data used to train systems can be manipulated to influence their behaviour.

Similarly, staff could inadvertently create vulnerabilities by entering confidential information into the prompts of large language models (LLMs).

The second theme reflects the need to maximise the benefits of AI for cyber defence.

“AI has the potential to improve cyber security by dramatically increasing the timeliness and accuracy of threat detection and response,” Cameron said.

“And we need to remember that in addition to helping make our country safer, the AI cyber security sector also has huge economic potential.”

Thirdly, the NCSC is working on the need to understand how adversaries are using AI and how it could be possible to disrupt them. She related this to China’s efforts to make itself a world leader in the technology, and to the potential for LLMs to lower the barriers to entry for attacks by hostile states and cyber criminals.

Practical steps

“Amid the huge dystopian hype about the impact of AI, I think there is a danger that we miss the real, practical steps that we need to take to secure AI,” Cameron concluded. “This will not be easy – but it is worth the dramatic benefit that AI will bring to our economy and society.

“At the NCSC, we will be there to understand the cyber security threats we face in AI and will advise on how to increase our collective security.”

 
