ETSI publishes principles for cyber security of AI

28/04/25

Mark Say Managing Editor

European standards organisation ETSI has published a technical specification for securing AI systems against cyber threats.

Labelled ETSI TS 104 223 – Securing artificial intelligence; baseline cyber security requirements for AI models and systems – the document provides guidance on protecting end users.

The specification sets out 13 core principles, expanded into 72 trackable principles, across five lifecycle phases: secure design, development, deployment, maintenance and end of life.

The core principles include evaluating threats and managing risks to AI systems; identifying, tracking and protecting assets; and enabling human responsibility for the systems.

The specification was developed by the ETSI technical committee for securing artificial intelligence, which includes representatives from international organisations, government bodies and cybersecurity experts.

Vital protection

Committee chair Scott Cadzow said: “In an era where cyber threats are growing in both volume and sophistication and negatively impacting organizations of every kind, it is vital that the design, development, deployment, and operation and maintenance of AI models is protected from malicious and unwanted inference.

“Security must be a core requirement, not just in the development phase, but throughout the lifecycle of the system. This new specification will help do just that—not only in Europe, but around the world.

“This publication is a global first in setting a clear baseline for securing AI and sets (the technical committee) on the path to giving trust in the security of AI for all its stakeholders.”
