Introduction

The October 3, 2024 memorandum from the Office of Management and Budget (OMB) outlines comprehensive guidelines for federal agencies on the procurement of artificial intelligence (AI) systems [5]. This directive implements requirements from the AI Executive Order and the Advancing American AI Act [2], emphasizing responsible procurement practices [2], particularly for generative AI and AI-based biometric systems [2], while ensuring alignment with democratic values.

Description

The memorandum requires all federal agencies procuring AI systems or services to adhere to new acquisition requirements. It defines AI systems broadly to include any software or tool that relies on machine learning algorithms [2], while excluding AI acquired for National Security Systems or used only incidentally by contractors [2]. By December 1, 2024 [1] [2], agencies must modify contracts to incorporate new acquisition practices for rights- and safety-impacting AI [2], including disclosing planned AI use in solicitations and ensuring protections against unlawful use by vendors [2].

Agencies must appoint a Chief AI Officer (CAIO), conduct due diligence when procuring AI tools [3] [5], and maintain a publicly accessible inventory of AI use cases [3] [5]. Particular attention is required for AI applications that may affect safety or civil rights [5], which must adhere to specified minimum practices [5]. Agencies must also adopt procurement practices that protect civil rights and civil liberties [2], with a focus on privacy protections throughout the procurement process [2].

Transparency requirements must be included in contracts so that vendors provide sufficient information for agencies to assess claims about AI use [4], manage risks [2] [3] [4], and conduct impact assessments throughout the acquisition lifecycle [3] [4]. Agencies must ensure that vendors supply the information needed to monitor AI performance and to comply with risk management practices [2], with the level of required transparency determined by each agency based on the associated risks [2]. Specific categories of information [2], such as performance metrics and training data [2], must be included in solicitations or contracts as needed for compliance [2].

Each agency must develop or update policies to facilitate cross-functional collaboration [5] and to ensure that appropriate controls are in place for AI acquisitions [3] [5]. Within 180 days of the memorandum's issuance [3] [5], CAIOs must report to OMB on their progress in formalizing cross-functional collaboration to manage AI performance and risks [5]. These policies should promote timely acquisition and proactive risk management [3], including involving experts in decision-making and establishing escalation processes for AI-related reviews [3].

The CAIO Council [5], in conjunction with OMB and other agencies [5], is tasked with sharing information on AI acquisition [5], including lessons learned [5], innovative practices [5], risk management mechanisms [5], and best practices for responsible AI procurement [5]. Agencies are directed to maintain and annually update an AI use case inventory [5], to follow best practices for communicating with vendors about AI use cases, and to require vendors to provide adequate information for risk assessments and impact evaluations throughout the acquisition lifecycle [5].

For generative AI systems [4] [5], additional practices are mandated to mitigate risks [5], including contract stipulations that outputs be marked as AI-generated [4], documentation of training and evaluation processes [4] [5], and measures to prevent the generation of illegal or violent content [4] [5]. Agencies are encouraged to establish empirical standards for evaluating generative AI systems so that they select the systems offering the most value [4].

Contracts should include language to prevent vendor lock-in [5], requiring vendors to share knowledge [5], provide rights to code and data [5], and support model and data portability along with pricing transparency [4] [5]. Interoperability [4], data portability [4] [5], and transparency should be prioritized during market research [5], solicitation [2] [4] [5], and evaluation [3] [4] [5].

In addition, a new national security memorandum calls for significant changes to acquisition and procurement processes to facilitate the federal government’s adoption of AI technologies [6]. The initiative aims to align AI deployment with democratic values while safeguarding human rights [6], civil rights [2] [6], civil liberties [2] [6], and privacy [2] [6]. The memorandum directs various agencies [6], including the Department of Defense and the Office of the Director of National Intelligence [6], to form a working group focused on AI procurement issues [6]. It also directs the Department of Commerce to establish a capability for voluntary, unclassified pre-deployment safety testing of advanced AI models [6], addressing risks related to cybersecurity [6], biosecurity [6], and system autonomy [6].

Congress is currently considering the bipartisan PREPARED for AI Act, which would codify requirements for AI acquisition and use by federal agencies [5], including the appointment of CAIOs and the establishment of an AI risk classification system [5]. The bill aims to create a risk-mitigating framework for AI procurement and use [4] [5], while prohibiting federal agencies from using AI to assign emotions [3], evaluate trustworthiness [3] [4], or infer race [3] [4]. Its passage remains uncertain, however, given the limited time remaining in the current congressional session [5] and the shift in focus to other AI-related regulations [4].

Conclusion

The OMB memorandum sets a robust framework for the responsible procurement of AI systems by federal agencies, emphasizing transparency [4] [5], risk management [2] [3] [5], and alignment with democratic values [6]. The directive’s implementation is expected to enhance the integrity and accountability of AI acquisitions, while ongoing legislative efforts, such as the PREPARED for AI Act, aim to further solidify these practices into law. The evolving landscape of AI regulation underscores the importance of continuous monitoring and adaptation to ensure ethical and effective AI deployment across federal agencies.

References

[1] https://cset.georgetown.edu/article/omb-issues-guidance-on-responsible-ai-acquisition/
[2] https://www.cov.com/news-and-insights/insights/2024/10/omb-releases-requirements-for-responsible-ai-procurement-by-federal-agencies
[3] https://www.lexology.com/library/detail.aspx?g=6e5d27ea-5bc7-4684-851a-4488d413da0c
[4] https://www.mintz.com/insights-center/viewpoints/54731/2024-10-24-omb-issues-additional-guidance-federal-agencies-their
[5] https://www.jdsupra.com/legalnews/omb-issues-additional-guidance-to-9443155/
[6] https://federalnewsnetwork.com/artificial-intelligence/2024/10/sweeping-ai-memo-directs-potential-acquisition-changes/