Introduction

The White House has released the first National Security Memorandum (NSM) dedicated to the safe, secure, and trustworthy development of artificial intelligence (AI) [4] [5]. The memorandum underscores the profound implications of AI advancements for national security and foreign policy, aiming to position the United States as a leader in AI development while safeguarding democratic values and human rights.

Description

The White House has issued the first National Security Memorandum (NSM) focused on advancing the safe, secure, and trustworthy development of artificial intelligence (AI) [4] [5], emphasizing the significant implications of AI advancements for national security and foreign policy [5]. The initiative aims to ensure that the United States leads in AI development while upholding democratic values and protecting human rights, civil rights, civil liberties, and privacy [2] [3] [4] [5]. The memorandum outlines a framework for national security agencies to adopt AI in alignment with these values [2], addressing risks such as privacy invasions, bias, discrimination, and potential human rights abuses [2] [5].

Key actions outlined in the memorandum include enhancing the security and diversity of chip supply chains to guard against foreign espionage and theft, tracking and countering adversaries’ development and use of AI [4], and prioritizing intelligence collection on competitors’ operations against the US AI sector [5]. The NSM designates the AI Safety Institute, housed within the Commerce Department [2], as the primary point of contact for industry collaboration with the government [5], granting it the authority to evaluate AI tools before deployment to ensure they do not assist terrorist organizations or hostile nations [1]. Additionally, the memorandum promotes the National AI Research Resource to empower a diverse range of researchers in AI development [5].

The NSM establishes a Framework to Advance AI Governance and Risk Management in National Security, which outlines mechanisms for risk management, accountability, and transparency [5]. It emphasizes the necessity of maintaining human oversight over decisions made with AI tools, particularly those that could be used for weapons targeting [1]. The memorandum explicitly prohibits AI from making decisions related to granting asylum, tracking individuals based on ethnicity or religion, or designating someone as a “known terrorist” without human involvement [1]. It encourages streamlined procurement practices and collaboration with non-traditional vendors, while prohibiting applications of AI that could infringe on civil rights or automate the deployment of nuclear weapons.

Internationally, the NSM builds on recent progress in AI governance [4] [5], including the development of an International Code of Conduct on AI and a Political Declaration on the Military Use of AI, signed by 56 nations [5]. The US Government is directed to work with allies to create a responsible governance framework for AI that adheres to international law and reflects democratic values [5]. The memorandum follows President Joe Biden’s Executive Order of October 2023, which mandated that federal agencies establish AI usage policies and set new standards for AI safety and security, as well as the international Bletchley Declaration on responsible AI development from November 2023 [4]. National security adviser Jake Sullivan has highlighted the transformative potential of AI for military operations, logistics, cyber defenses, and intelligence analysis, while also addressing concerns regarding lethal autonomous drones [3].

Despite the risks associated with AI tools, including the potential for inaccuracies in critical national security contexts, the government aims to facilitate experimentation with AI through pilot programs [2]. The policy is seen as essential for maintaining a competitive edge over other nations, particularly China [2] [3], although the memorandum’s effectiveness remains uncertain, as many of its deadlines will extend beyond Biden’s presidency [1]. The national security community is aware of the challenges posed by AI and is implementing a process for accrediting AI systems, alongside existing guidelines from the Defense Department and Intelligence Community [2]. There are also growing concerns about the handling of US citizens’ sensitive data, which adversaries can exploit, prompting a separate executive order to limit such access [2].

Conclusion

The National Security Memorandum represents a significant step in addressing the challenges and opportunities presented by AI in national security. By establishing a comprehensive framework for AI governance, the memorandum seeks to mitigate risks while promoting innovation and collaboration. As the US navigates the evolving landscape of AI, the focus remains on maintaining a competitive edge, safeguarding democratic values [5], and ensuring the responsible use of AI technologies. The long-term success of these initiatives will depend on continued vigilance, international cooperation, and the ability to adapt to emerging threats and opportunities.

References

[1] https://techcrunch.com/2024/10/24/new-white-house-memo-calls-for-agencies-to-protect-ai-from-foreign-adversaries/
[2] https://www.defenseone.com/policy/2024/10/white-house-signs-national-security-memo-ai/400512/
[3] https://apnews.com/article/artificial-intelligence-national-security-spy-agencies-abuses-a542119faf6c9f5e77c2e554463bff5a
[4] https://www.infosecurity-magazine.com/news/white-house-ai-national-security/
[5] https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/