Introduction
On October 24, 2024, President Biden issued a National Security Memorandum (NSM) to advance US leadership in artificial intelligence (AI) while ensuring its development aligns with democratic values and human rights [1] [2] [4]. Building on the October 2023 AI Executive Order, the memorandum directs federal agencies to take concrete actions, emphasizing intelligence collection [1] [2], cybersecurity [1] [2] [3] [4], and collaboration with the private sector.
Description
On October 24, 2024, President Biden issued a National Security Memorandum (NSM) focused on enhancing US leadership in artificial intelligence (AI) while ensuring its safe and trustworthy development in alignment with democratic values and human rights [1] [2] [4]. The memorandum, a response to the October 2023 AI Executive Order [4], outlines both immediate and long-term actions for federal agencies [1] [2], emphasizing intelligence collection on competitors' AI operations and the need for timely cybersecurity support for AI developers.
The NSM identifies three primary objectives for the national security community, centering on maintaining US leadership in advanced AI systems and retaining top AI talent [4]. It highlights the significant implications of AI advances for national security and foreign policy [3], and it stresses collaboration between the public and private sectors to bolster national security interests, particularly by attracting global AI talent and facilitating the entry of skilled noncitizens into the US [4].
To support these objectives, the NSM designates the AI Safety Institute as the primary point of contact for US industry, facilitating partnerships with national security agencies [3]. It introduces governance and risk management guidelines for AI in national security [1], requiring agencies to monitor and mitigate risks related to privacy, bias, and discrimination [3], as well as human rights abuses [1] [2] [3]. The accompanying National Security AI Governance Framework establishes four key pillars for AI governance within federal agencies, aiming to protect human rights, civil liberties, and privacy while ensuring accountability in military AI applications [4]. The framework applies to all AI systems, particularly those deemed high-risk or prohibited [4], and includes mechanisms for accountability [1], transparency [1] [2] [3], and risk management [1] [2] [3] [4]. Notably, it prohibits the use of AI for assigning emotions, evaluating trustworthiness, or inferring race [2], and it introduces a waiver process that allows chief AI officers to bypass certain risk management practices under specific conditions [4].
Furthermore, the NSM directs the Department of State to develop a strategy for promoting international AI governance norms that align with safe, secure, and trustworthy AI [1] [2] [3] [4], as well as with democratic values, through engagement with international organizations and competitors [2]. This initiative builds on international progress in AI governance, including the development of an International Code of Conduct on AI and a Political Declaration on the Military Use of AI signed by 56 nations [3]. The US Government is tasked with collaborating with allies to create a responsible governance framework that adheres to international law and protects fundamental freedoms [3].
Congress is currently reviewing bipartisan legislation, the PREPARED for AI Act [1] [2], which would codify requirements from the NSM and the Framework, including the establishment of an AI risk classification system and a ban on federal agencies using AI to assign emotions, evaluate trustworthiness, or infer race [2]. The act's progress is uncertain, however: it has been inactive since July, and legislative attention has shifted toward regulating AI-generated election deepfakes [1]. This legislative effort is part of the Biden-Harris Administration's broader strategy for responsible innovation [3], and implementation of the NSM may also be shaped by the outcome of the upcoming US presidential election, as a new administration could alter the trajectory of AI policy [4].
Conclusion
The National Security Memorandum represents a significant step toward positioning the United States as a leader in AI while safeguarding democratic principles and human rights. Its implementation could have profound implications for national security, international collaboration [1] [2], and legislative developments. The initiative's success will depend on effective collaboration among government agencies, the private sector [4], and international partners [3], as well as on the political landscape following the upcoming presidential election.
References
[1] https://www.mintz.com/insights-center/viewpoints/54731/2024-10-31-biden-administration-issues-national-security
[2] https://www.jdsupra.com/legalnews/the-biden-administration-issues-8804663/
[3] https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/
[4] https://www.csis.org/analysis/biden-administrations-national-security-memorandum-ai-explained