13 core principles to strengthen AI cybersecurity

The new ETSI TS 104 223 specification for securing AI provides reliable and actionable cybersecurity guidance aimed at protecting end users. Adopting a whole-lifecycle approach, the framework outlines 13 core principles that expand into 72 detailed, trackable provisions across five key phases of the AI lifecycle, all designed to enhance the overall security of AI systems.

ETSI TS 104 223

The specification details transparent, high-level principles and provisions for securing AI. It provides stakeholders in the AI supply chain—from developers and vendors to integrators and operators—with a robust set of baseline security requirements, helping to protect AI systems from evolving cyber threats.

AI presents unique challenges compared to traditional software, including risks such as data poisoning, model obfuscation, indirect prompt injection, and vulnerabilities tied to complex data management. By taking these differences into account, ETSI TS 104 223 offers targeted guidance that integrates established practices in cybersecurity and AI with novel approaches.

The specification was developed by the ETSI Technical Committee (TC) on Securing Artificial Intelligence (SAI), which includes representatives from international organisations, government bodies, and cybersecurity experts. This cross-disciplinary collaboration ensures that the requirements are both globally relevant and practically implementable.

In addition to the primary specification document, ETSI will also publish a practical implementation guide for small and medium-sized enterprises (SMEs) and other stakeholders. This guide will include case studies across a variety of deployment environments to assist organisations in applying the security requirements effectively.

“In an era where cyber threats are growing in both volume and sophistication and negatively impacting organizations of every kind, it is vital that the design, development, deployment, and operation and maintenance of AI models is protected from malicious and unwanted inference,” said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence. “Security must be a core requirement, not just in the development phase, but throughout the lifecycle of the system. This new specification will help do just that—not only in Europe, but around the world. This publication is a global first in setting a clear baseline for securing AI and sets TC SAI on the path to giving trust in the security of AI for all its stakeholders.”