Introduction
The UK Government Department for Science, Innovation and Technology (DSIT) has launched a voluntary Code of Practice (CoP) for the Cyber Security of AI [5]. Structured around 13 principles [5], the initiative aims to protect AI systems from unique cyber threats and enhance their resilience, thereby supporting the digital economy and establishing baseline security requirements for safe deployment across various sectors [2] [3] [4].
Description
The UK Government Department for Science, Innovation and Technology (DSIT) has introduced a voluntary Code of Practice (CoP) for the Cyber Security of AI, structured around 13 principles that delineate the responsibilities of various stakeholders throughout the AI system lifecycle, from planning to decommissioning [5]. This initiative is designed to protect AI systems from unique cyber threats such as data poisoning and model obfuscation [3], as well as other vulnerabilities identified in recent research, including inadequate security architecture and insufficient data privacy safeguards [1]. By enhancing the resilience of AI systems [3], the CoP aims to bolster the digital economy and establish baseline security requirements for safe deployment across various sectors, including public services and consumer applications [2].
In light of the increasing adoption of AI technologies, the code provides essential measures to safeguard these systems, particularly as many businesses have experienced cyber attacks in the past year [2]. The code is intended to form the basis of a global standard developed through the European Telecommunications Standards Institute (ETSI) [1] [2] [3], reflecting the UK’s commitment to setting a global benchmark for secure AI [2] and reinforcing its strategic plan to integrate AI across various sectors of British industry [4]. The UK AI sector, which generated substantial revenue last year, will be better positioned to sustain growth while ensuring the protection of critical infrastructure [2]. The Minister for Cyber Security has emphasized the importance of this code in fostering a secure framework for AI development and deployment, allowing businesses to innovate confidently while protecting vital systems and data [3].
An implementation guide, led by John Sotiropoulos, has been commissioned to help developers understand the specific requirements for various AI systems [2] [3] [5]. The guide draws extensively on existing guidelines and publications from organizations such as the UK’s National Cyber Security Centre (NCSC), the Information Commissioner’s Office (ICO), the National Institute of Standards and Technology (NIST), MITRE, the Cloud Security Alliance (CSA), and the Open Web Application Security Project (OWASP) [5]. It emphasizes a shift in AI security from theoretical concepts to practical application, addressing the need of AI builders, defenders, decision-makers, and other stakeholders for actionable guidance [5].
The guide outlines key steps for compliance with the CoP and covers essential topics such as Risk Management, Threat Modelling, Secure by Design development, and the relationship between security and Responsible AI [5]. It also addresses securing the AI supply chain, monitoring system behavior [4], testing AI solutions, vulnerability management, incident response handling, and compliant decommissioning practices [5]. The government highlights the transformative potential of AI in public services and economic recovery, and the CoP aims to bolster the resilience of AI systems against malicious attacks [2].
Additionally, the UK has initiated an International Coalition on Cyber Security Workforces to collaborate with other nations in addressing cyber threats and the global skills gap in cyber security [2]. Plans are underway to strengthen online defenses through a new Cyber Security and Resilience Bill, alongside a response to the Cyber Governance Code of Practice, which aims to improve the understanding and management of cyber risks at the leadership level [2].
The Cyber Governance Code of Practice, developed in partnership with the NCSC and industry experts, provides actionable guidance for directors to manage cyber risks effectively [2]. The AI CoP, meanwhile, will contribute to the development of a global standard within ETSI’s Securing AI Committee, promoting international alignment of security requirements for AI systems [2]. The government plans to update the CoP and implementation guide to align with future ETSI standards, ensuring that the UK remains at the forefront of secure AI innovation [2]. The final Code of Practice for the Cyber Security of AI, published on 31 January 2025 [1], builds on existing guidelines and introduces considerations for the end-of-life phase of AI systems, identifying key stakeholders and providing example measures for various AI applications [1] [3] [5]. Although currently voluntary, the Code aims to inform a new global standard through submission to ETSI, with potential recognition as a harmonised standard across the EU and beyond [1].
Conclusion
The introduction of the CoP for the Cyber Security of AI by the UK Government represents a significant step towards enhancing the security and resilience of AI systems. By establishing a framework for addressing unique cyber threats, the initiative not only supports the digital economy but also positions the UK as a leader in setting global standards for secure AI deployment. The ongoing efforts to align with international standards and address the global skills gap in cyber security further underscore the UK’s commitment to fostering a secure and innovative AI landscape.
References
[1] https://www.handleygill.co.uk/handley-gill-blog/artificial-intelligence-ai-cyber-security-code-of-practice
[2] https://www.gov.uk/government/news/world-leading-ai-cyber-security-standard-to-protect-digital-economy-and-deliver-plan-for-change
[3] https://www.computerweekly.com/news/366618702/Government-sets-out-cyber-security-practice-code-to-stoke-AI-growth
[4] https://www.techradar.com/pro/uk-government-releases-new-ai-code-of-practice-to-help-protect-companies
[5] https://genai.owasp.org/2025/01/31/owasp-ai-security-guidelines-offer-a-supporting-foundation-for-new-uk-government-ai-security-guidelines/