A new European standard outlines security requirements for AI
The European Telecommunications Standards Institute (ETSI) has released a new European Standard that addresses a growing concern for security teams working with AI. The standard, ETSI EN 304 223, sets baseline cybersecurity requirements for AI models and systems intended for real-world use.

Addressing security risks specific to AI
ETSI EN 304 223 treats AI as a distinct category of technology from a security perspective. AI systems introduce risks tied to their data pipelines, model behavior, and operational environments. These include data poisoning, model obfuscation, indirect prompt injection, and weaknesses linked to complex training and deployment practices.
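To make one of these risks concrete, the sketch below shows a crude check for label-flip data poisoning: a training sample is treated as suspicious if its feature value sits closer to another class's centroid than to its own. The function name, data, and heuristic are hypothetical illustrations, not controls taken from the standard.

```python
# Illustrative sketch only -- not a control from ETSI EN 304 223.
# A naive label-consistency check: flag training samples whose feature
# value is nearer another class's centroid than their own class's,
# one weak signal for label-flip data poisoning.
from statistics import mean

def flag_label_outliers(samples):
    """samples: list of (feature, label) pairs.
    Returns indices of samples closer to another class's centroid."""
    groups = {}
    for x, y in samples:
        groups.setdefault(y, []).append(x)
    if len(groups) < 2:
        return []  # need at least two classes to compare against
    centroids = {y: mean(xs) for y, xs in groups.items()}
    suspects = []
    for i, (x, y) in enumerate(samples):
        own = abs(x - centroids[y])
        nearest_other = min(abs(x - c) for lbl, c in centroids.items() if lbl != y)
        if nearest_other < own:
            suspects.append(i)
    return suspects

data = [(0.0, "a"), (0.1, "a"), (0.2, "a"), (10.0, "a"),  # 10.0 likely mislabeled
        (9.8, "b"), (10.1, "b"), (9.9, "b")]
print(flag_label_outliers(data))  # → [3]
```

Real poisoning defenses operate on high-dimensional embeddings and robust statistics; this one-dimensional version only conveys the idea of validating data against expected class structure.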
ETSI EN 304 223 brings established cybersecurity practices together with measures designed for these AI-specific risks. The result is a structured set of requirements that security teams can apply to AI models and systems throughout their operational lifecycle.
Lifecycle-based requirements
ETSI EN 304 223 sets out 13 principles, with accompanying requirements, across five phases of the AI lifecycle:
- Secure design
- Secure development
- Secure deployment
- Secure maintenance
- Secure end of life
Each phase aligns with internationally recognized AI lifecycle models. References to related standards and publications appear at the start of each principle to support consistent implementation and alignment with existing guidance across the AI ecosystem.
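For teams adopting the standard, the five phases can double as a simple coverage checklist. The sketch below is a minimal, hypothetical illustration of that idea; the control names are placeholders, not provisions quoted from ETSI EN 304 223.

```python
# Illustrative sketch only: tracking which lifecycle phases have at least
# one implemented control. Phase names follow the standard's lifecycle;
# the example controls are hypothetical placeholders.
PHASES = ["secure design", "secure development", "secure deployment",
          "secure maintenance", "secure end of life"]

def uncovered_phases(controls):
    """controls: dict mapping a phase name to a list of implemented controls.
    Returns the phases that have no control recorded yet."""
    return [p for p in PHASES if not controls.get(p)]

controls = {
    "secure design": ["threat model reviewed"],
    "secure development": ["training data provenance recorded"],
    "secure deployment": ["model release signed"],
}
print(uncovered_phases(controls))  # → ['secure maintenance', 'secure end of life']
```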
Relevance across the AI supply chain
The scope of the standard covers AI systems that rely on deep neural networks, including generative AI. It targets systems intended for deployment in operational environments. Vendors, system integrators, and operators can use the standard as a shared baseline for AI security practices.
Development of ETSI EN 304 223 reflects input from international organizations, government bodies, and experts from the cybersecurity and AI communities. This collaborative approach supports applicability across multiple industries and deployment contexts.
“ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems,” said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence. “At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.”