The US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) have collaborated to publish comprehensive global guidelines for securely developing and deploying AI systems.

Description

These guidelines [1] [2] [3] [4] [5] [6], developed by the NCSC and CISA, aim to ensure cybersecurity is integrated into the entire development process of AI systems [6]. They cover four key areas: secure design [4], secure development [1] [2] [3] [4] [6], secure deployment [1] [4] [6], and secure operation and maintenance [1] [4]. The guidelines were developed with input from industry experts and 21 other international agencies and ministries [5], and have been endorsed by 18 countries, including all members of the G7 [3] [5].

The guidelines include principles such as understanding risks [6], threat modeling [6], securing the supply chain [4], generating appropriate documentation [4], protecting infrastructure and models against compromise [4], monitoring system behavior and data drift [4], and logging system inputs for audits and investigations [4]. Their purpose is to help developers make informed cybersecurity decisions at every stage of AI system development [3]. They emphasize that security is essential to safe and trustworthy AI, particularly in protecting data and models from attackers [5]. The guidelines also promote a “secure by design” approach and serve as a call to action for organizations worldwide to strengthen their cybersecurity posture and protect AI systems from evolving threats [2].
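To make two of these operational principles concrete, the sketch below shows one minimal way a team might log model inputs and outputs for later audit and flag statistical drift in incoming data. It is an illustrative Python example only; the class names (AuditLogger, DriftMonitor), file paths, and thresholds are hypothetical and do not come from the CISA/NCSC document.

```python
"""Illustrative sketch of two operational principles from the guidelines:
logging system inputs for audit, and monitoring for data drift.
All names and thresholds here are hypothetical, not from the document."""

import json
import statistics
import time
from typing import Sequence


class AuditLogger:
    """Append-only JSON-lines log of model inputs and outputs for audits."""

    def __init__(self, path: str = "model_audit.log"):
        self.path = path

    def record(self, prompt: str, response: str) -> None:
        # Each entry is timestamped so investigators can reconstruct activity.
        entry = {"ts": time.time(), "prompt": prompt, "response": response}
        with open(self.path, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(entry) + "\n")


class DriftMonitor:
    """Flags drift when the mean of incoming feature values moves more than
    `threshold` standard deviations away from a reference sample."""

    def __init__(self, reference: Sequence[float], threshold: float = 3.0):
        self.ref_mean = statistics.fmean(reference)
        self.ref_stdev = statistics.stdev(reference)
        self.threshold = threshold

    def is_drifting(self, batch: Sequence[float]) -> bool:
        z = abs(statistics.fmean(batch) - self.ref_mean) / self.ref_stdev
        return z > self.threshold


if __name__ == "__main__":
    logger = AuditLogger()
    logger.record("example prompt", "example response")

    monitor = DriftMonitor(reference=[0.9, 1.0, 1.1, 1.0, 0.95])
    print("drift detected:", monitor.is_drifting([2.4, 2.6, 2.5]))
```

In practice, production systems would use dedicated observability and drift-detection tooling rather than a hand-rolled check like this; the point is simply that the guidelines treat input logging and behavioral monitoring as routine operational controls.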

The guidelines build on the NCSC’s Secure development and deployment guidance [6], NIST’s Secure Software Development Framework [6], and principles previously published by CISA and other international cyber agencies [6]. The document has been endorsed by major technology companies including Amazon [1], Google [1], Microsoft [1], and OpenAI [1], as well as by representatives from 17 other countries [1].

Conclusion

The release of these guidelines marks a significant step toward building a more secure and resilient digital future [2]. By integrating cybersecurity into the development process of AI systems [6], organizations can enhance their cybersecurity posture and protect against evolving threats [2]. The endorsement of these guidelines by industry leaders and international agencies highlights their importance and sets a precedent for global cybersecurity standards. As AI continues to advance, these guidelines will play a crucial role in ensuring the safe and trustworthy deployment of AI systems.

References

[1] https://www.computerweekly.com/news/366561153/NCSC-publishes-landmark-guidelines-on-AI-cyber-security
[2] https://www.secureworld.io/industry-news/cisa-ncsc-unveil-ai-guidelines
[3] https://www.dhs.gov/news/2023/11/26/dhscisa-and-uk-ncsc-release-joint-guidelines-secure-ai-system-development
[4] https://www.itpro.com/technology/artificial-intelligence/ncsc-announces-global-guidelines-on-ai-security
[5] https://www.infosecurity-magazine.com/news/uk-first-guidelines-ai-safety/
[6] https://www.forbes.com/sites/emmawoollacott/2023/11/27/us-uk-and-16-other-nations-agree-ai-security-guidelines/