Introduction
The UK government, through the Department for Science, Innovation and Technology (DSIT) [3] [13], introduced an updated voluntary AI Cyber Security Code of Practice (CoP) on January 31, 2025. This initiative aims to enhance the security and resilience of AI systems and organizations by establishing baseline cybersecurity principles. Developed in collaboration with Kainos [7], the Code seeks to assist various sectors in safely leveraging AI’s transformative potential, thereby supporting economic growth and improving public services [11].
Description
The revised Code outlines essential measures for securing AI systems against AI-specific cybersecurity risks, such as hacking [4] [5], sabotage [2] [4] [11], data poisoning [1] [8], model obfuscation [1] [8], and indirect prompt injection [8]. It emphasizes the importance of integrating security throughout the entire AI lifecycle [10], covering secure design, development [1] [2] [3] [6] [7] [8] [10] [11] [12] [13], deployment [1] [2] [7] [8] [10] [11], maintenance [7] [8] [10], and end-of-life processes [7] [10]. By adhering to these principles [5], businesses can strengthen their cyber defenses and foster an environment conducive to AI innovation in the UK and globally.
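One lifecycle concern above, protecting the deployment stage against supply-chain tampering, can be illustrated with a minimal sketch: pinning a model artifact's SHA-256 digest and refusing to load the file if it has changed. The function names and the 1 MiB chunk size are illustrative assumptions, not controls prescribed by the Code.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MiB chunks (an arbitrary, illustrative size).
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned_digest: str) -> bool:
    """Return True only if the artifact still matches the digest recorded at build time."""
    return sha256_of(path) == pinned_digest
```

A deployment script would record the digest when the model is built and call `verify_artifact` before loading it, failing closed if the check does not pass.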
In response to a rise in cyberattacks [9], with nearly half of organizations reporting breaches in the past year [9], the initiative seeks to establish a global standard for securing AI technology in collaboration with the European Telecommunications Standards Institute (ETSI) [7]. The updated Code will serve as the foundation for the new ETSI standard TS 104 223, alongside an accompanying implementation guide [6] [12], TR 104 128 [6] [12]. This effort reinforces the UK’s leadership in safe innovation and aligns with the government’s AI Opportunities Action Plan. The Code and its implementation guide will be continuously updated to align with forthcoming ETSI guidelines.
Strong support from stakeholders has been evident, with 80% of respondents endorsing the initiative during a public consultation [8]. The Code builds on existing guidelines from organizations such as the National Cyber Security Centre (NCSC), the Information Commissioner’s Office (ICO), NIST [13], MITRE [13], CSA [13], and OWASP [13], providing clarity on baseline security requirements for the AI supply chain [8]. It outlines specific cybersecurity requirements for AI systems [10], including generative AI [10], and offers guidance on implementing cybersecurity training [10], developing recovery plans for cyber incidents [4] [10], and conducting thorough risk assessments [10].
Accompanying the Code is an implementation guide, developed by John Sotiropoulos [12] [13], Head of AI Security at Kainos [13], in collaboration with various stakeholders and reviewed by officials from DSIT and NCSC. This guide details how organizations can apply the Code’s 13 principles across various scenarios [3], emphasizing practical, actionable, risk-based measures for securely implementing AI capabilities [3]. It includes practical steps and example controls for each principle, covering designing AI systems with security in mind [1], ensuring human responsibility [1] [8], and securing software supply chains [1]. The principles aim to raise awareness of AI security threats [8], protect assets [8], secure infrastructure and supply chains [8], document data and models [8], conduct thorough risk assessments [4] [10], maintain security updates [8], monitor system behavior [5] [8], and ensure proper data disposal [8].
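The "document data and models" principle can be sketched as a minimal provenance record kept alongside each AI asset. The fields below (name, version, data sources, digest, known limitations) are assumed for illustration; they are not a schema prescribed by the Code or the implementation guide.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelRecord:
    """Minimal provenance record for an AI asset (illustrative fields only)."""
    name: str
    version: str
    training_data_sources: list   # where the training data came from
    sha256: str                   # digest of the released model artifact
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the record so it can be stored next to the artifact."""
        return json.dumps(asdict(self), indent=2)
```

Keeping such a record under version control gives incident responders a starting point for tracing which data and which artifact version were involved in a breach.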
The implementation guide offers practical guidance on key aspects of the secure AI lifecycle [13], such as risk management [13], threat modeling [13], and the importance of awareness regarding the evolving AI threat landscape [13]. It emphasizes secure-by-design development [7], the documentation and protection of AI assets [13], and non-technical controls like human oversight [13]. Additionally, it addresses securing the AI supply chain [13], testing AI solutions [13], and incident response handling [13], while recognizing that security measures should be context-dependent and proportional to organizational needs [13].
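One common way to keep controls "proportional to organizational needs" is a simple likelihood-impact risk matrix applied during risk assessment. The 5×5 scale and the banding thresholds below are illustrative conventions, not values taken from the guide.

```python
def risk_rating(likelihood: int, impact: int) -> str:
    """Band a classic 5x5 risk matrix score into a rating.

    Both inputs are on a 1..5 scale; the thresholds (>=15 high, >=8 medium)
    are illustrative and would be tuned to an organization's risk appetite.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```

A high rating might justify heavyweight controls such as mandatory human review, while a low rating could be accepted and monitored, matching the guide's emphasis on context-dependent measures.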
Furthermore, the UK has initiated the International Coalition on Cyber Security Workforces (ICCSW) with partners like Japan [4], Singapore [1] [4], and Canada [1] [4], aiming to address the global cyber skills shortage [1] [4], promote diversity [4], and enhance international cooperation on cybersecurity [4]. Strengthening cyber skills is projected to significantly benefit the UK cybersecurity industry [4], fostering a secure and inclusive digital workforce and ensuring that AI’s potential is harnessed without introducing new vulnerabilities and cyber risks. The anticipated outcomes of the Code’s implementation include enhanced cyber resilience across industries [6], a reduction in the risk of data breaches and cyberattacks [6], and increased trust among consumers and stakeholders [6], particularly in sectors that heavily utilize AI technologies [6], such as healthcare [6] [13], finance [6], transportation [6], and education [6]. This initiative is part of broader government efforts to secure connected technologies [6], including 5G networks and IoT devices [6], to foster a safe and trustworthy digital environment for citizens [6].
Conclusion
The updated AI Cyber Security Code of Practice represents a significant step towards enhancing the security and resilience of AI systems in the UK and beyond. By establishing a global standard and providing comprehensive guidance, the initiative aims to mitigate cybersecurity risks and foster innovation. The collaboration with international partners and stakeholders underscores the importance of a unified approach to addressing the evolving threat landscape. As the Code and its implementation guide continue to evolve, they will play a crucial role in shaping a secure and trustworthy digital future, benefiting both the AI and cybersecurity industries.
References
[1] https://www.computerweekly.com/news/366618702/Government-sets-out-cyber-security-practice-code-to-stoke-AI-growth
[2] https://www.breaking.co.uk/uk-government/world-leading-ai-cyber-security-standard-to-protect-digital-economy-and-deliver-plan-for-change-23873.html
[3] https://www.kainos.com/insights/news/kainos-leads-on-creating-ai-cyber-security-guidance
[4] https://www.openaccessgovernment.org/uks-new-cyber-security-code-of-practice-to-protect-ai-use/188235/
[5] https://www.techradar.com/pro/uk-government-releases-new-ai-code-of-practice-to-help-protect-companies
[6] https://globalregulatoryinsights.com/art/ai-cyber-security-code-of-practice/
[7] https://www.infosecurity-magazine.com/news/uk-announces-worldfirst-ai-standard/
[8] https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai
[9] https://www.newsminimalist.com/articles/uk-government-unveils-ai-code-of-practice-to-boost-cybersecurity-for-businesses-8bc4f089
[10] https://www.techuk.org/resource/government-responds-to-call-for-views-on-the-cyber-security-of-artificial-intelligence-and-publishes-voluntary-code-of-practice.html
[11] https://www.miragenews.com/uk-sets-ai-cybersecurity-standard-to-safeguard-1400606/
[12] https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice
[13] https://genai.owasp.org/2025/01/31/owasp-ai-security-guidelines-offer-a-supporting-foundation-for-new-uk-government-ai-security-guidelines/