The introduction of the new standard, ETSI EN 304 223, marks a significant step in establishing cybersecurity protocols for artificial intelligence systems. This framework is the first European Norm dedicated to securing AI technologies, impacting markets beyond Europe. Approved by national standards bodies, it addresses the unique security challenges posed by AI, such as data poisoning and indirect prompt injection, which are not typical in conventional software.
ETSI EN 304 223 outlines thirteen principles that encompass the full lifecycle of AI development, from secure design to maintenance and end-of-life considerations. This comprehensive approach aligns with globally recognized AI lifecycle models, facilitating interoperability across various regulatory frameworks. The standard is aimed at a diverse range of stakeholders in the AI supply chain, including vendors and operators, and is relevant for technologies such as deep neural networks and generative AI.
The need for robust cybersecurity is increasingly critical as organizations integrate AI systems into their operations. As these technologies proliferate, concerns regarding their ethical use and security are heightened. To further assist organizations, an upcoming ETSI Technical Report 104 159 will delve into risks specific to generative AI, addressing issues like misinformation and intellectual property protection.