Introduction
On April 3, 2025, the White House’s Office of Management and Budget (OMB) released two memoranda, M-25-21 and M-25-22, establishing new guidelines for the governance, use, and acquisition of high-impact artificial intelligence (AI) within US federal agencies [3] [4] [9]. These directives support Executive Order 14179 and aim to create a pro-innovation, pro-competition environment for AI development and application, replacing previous guidance from the Biden administration [1] [8].
Description
On April 3, 2025, the White House’s Office of Management and Budget (OMB) issued two memoranda, M-25-21 and M-25-22, detailing new requirements for the governance, use, and acquisition of high-impact artificial intelligence (AI) within US federal agencies, in support of Executive Order 14179 [3] [4] [9]. The memoranda aim to foster a pro-innovation and pro-competition environment for the development and use of AI systems, replacing previous guidance from the Biden administration [1] [8]. High-impact AI is defined as AI whose output significantly influences decisions or actions with legal or material effects, particularly in sensitive areas such as human health and safety; it includes applications such as biometric identification used for one-to-many identification in publicly accessible spaces [1] [3]. The memoranda apply to both new and existing AI that covered agencies develop, use, or acquire, focusing on the system functionality that relies on AI rather than on the entire information system, and explicitly excluding AI used in National Security Systems [5].
The first memorandum, M-25-21, outlines how federal agencies can leverage AI to enhance operational efficiency, improve decision-making, automate tasks, and analyze large datasets [4] [9]. Agencies are required to appoint a Chief AI Officer (CAIO) within 60 days to lead governance efforts and ensure compliance with risk-management protocols [4]. The CAIO will serve as a senior advisor to agency heads, facilitate interagency coordination, and have the authority to waive specific requirements under certain conditions, such as increased overall risk or impediments to critical operations [3]. Additionally, agency officials must provide documentation to the CAIO to rebut the presumption that certain applications are high-impact [1].
Agencies must develop and publicly post AI strategies within 180 days to eliminate barriers to AI use, and must create internal policies on generative AI within 270 days [9]. Pre-deployment testing is mandated to simulate real-world outcomes, helping agencies identify expected benefits and prepare risk-mitigation plans for potential harms [3]. Documented AI impact assessments are required, covering intended purposes, expected benefits, data analysis, potential impacts on privacy and civil rights, reassessment procedures, cost analysis, independent reviews, and risk acceptance [3] [8]. Agencies are urged to streamline AI adoption by minimizing unnecessary requirements, enhancing transparency, and optimizing resources [2] [4] [5].
Ongoing monitoring of AI performance is mandated to detect adverse impacts, with periodic human reviews to address unforeseen circumstances [3]. Operators of AI systems must receive adequate training and oversight, particularly in healthcare applications, which must include human oversight and fail-safe mechanisms to minimize the risk of significant harm [3]. Agencies are tasked with identifying and removing barriers to AI adoption, and contractors can expect increased federal interest in American-made AI products [9]. Agencies must also share any custom-developed AI code currently in use, which may affect contractors' proprietary information [7] [9]. Public input on AI policies is encouraged, and contractors should monitor opportunities for feedback during rulemaking and public comment periods [7] [9].
Acquisition efforts will prioritize AI technologies developed and produced in the United States, with agencies required to update their acquisition procedures within 270 days [7] [9]. Guides for federal acquisition of AI will be released within 100 days, providing insight for contractors [9]. An internal best-practices repository for AI acquisition, standardizing contract clauses and pricing across agencies, will be developed within 200 days, although it will not be publicly accessible [7] [9]. Agencies are directed to consider vendors' and contractors' use of AI in contract performance, with caution against unsolicited AI use that may pose risks [7] [9]. Agencies should include provisions in solicitations requiring vendors to disclose their use of AI in contract performance, as well as any unanticipated uses, ensuring transparency in AI deployment [2] [6].
Furthermore, agencies are instructed to provide consistent remedies or appeals for individuals adversely affected by high-impact AI decisions, ensuring timely human review and minimizing administrative burdens [3]. Stakeholders in industries such as healthcare, managed care, and life sciences are advised to monitor developments at federal agencies with regulatory authority over their operations, as the memoranda may introduce new contractual standards regarding product quality [3].
Agencies must assess the potential risks of high-impact AI during acquisition planning, identifying foreseeable use cases and determining whether a system may involve high-impact applications [6]. Contracts for AI systems and services should include terms addressing intellectual property (IP) rights, privacy, vendor lock-in protections, compliance with risk-management practices for high-impact AI, ongoing performance monitoring, vendor performance requirements, and notification of new AI enhancements [2] [3] [4] [5] [6] [8]. Contractors should prepare for these terms to appear in contracts and renewals starting in early October [6].
If a contract for an AI system or service is not renewed, agencies are instructed to work with vendors to implement contractual terms covering ongoing rights and access to data or derived products [6]. Contractors should proactively negotiate clear terms on data format and usability to avoid disputes at contract expiration [6]. The appendices of the memoranda outline required government actions, including achieving full compliance within 180 days and developing processes for data ownership and IP rights within 200 days [6]. Additionally, the General Services Administration (GSA) is tasked with creating publicly available tools for AI system procurement within 100 days, including procurement playbooks and an online repository of resources to promote knowledge-sharing among agencies [6].
Overall, the memoranda emphasize the necessity of a robust AI governance framework that balances competitiveness with the risks associated with AI systems, encouraging procurement practices that foster competition and interoperability in the federal AI marketplace [1] [4] [8]. Contractors are encouraged to align their governance models with these standards to build internal trust and mitigate regulatory risk, and to invest in infrastructure that can accommodate regulatory change around AI [4]. Agencies are also required to establish governance boards within 90 days to ensure cross-functional oversight, with representation from IT, cybersecurity, data, and budget stakeholders [5]. They must annually inventory and publicly disclose their AI use cases, along with risk determinations and justifications for any waivers from the minimum practices for high-impact AI [5].
Conclusion
The issuance of memoranda M-25-21 and M-25-22 marks a significant shift in the governance and acquisition of AI within US federal agencies. By establishing a comprehensive framework, these directives aim to enhance innovation and competition while addressing the risks associated with high-impact AI. The memoranda’s emphasis on transparency, risk management, and interagency coordination [2] [4] [5] [6] [8] is expected to influence AI practices across sectors, encouraging stakeholders to align with the new standards and prepare for an evolving regulatory landscape.
References
[1] https://www.pilieromazza.com/omb-issues-memoranda-on-use-and-acquisition-of-ai-by-federal-agencies-part-1-what-it-means-for-government-contractors/
[2] https://www.hunton.com/privacy-and-information-security-law/omb-issues-revised-policies-on-ai-use-and-procurement-by-federal-agencies
[3] https://www.winston.com/en/insights-news/white-house-memorandum-elaborates-on-prior-executive-order-with-requirements-for-high-impact-ai-used-by-federal-agencies
[4] https://nquiringminds.com/ai-legal-news/omb-issues-new-ai-governance-memoranda-for-federal-agencies/
[5] https://natlawreview.com/article/omb-issues-revised-policies-ai-use-and-procurement-federal-agencies
[6] https://www.pilieromazza.com/omb-issues-memoranda-on-use-and-acquisition-of-ai-by-federal-agencies-part-2-what-it-means-for-government-contractors/
[7] https://natlawreview.com/article/all-american-ai-new-omb-memos-set-priorities-federal-ai-use-and-acquisition
[8] https://natlawreview.com/article/new-federal-agency-policies-and-protocols-artificial-intelligence-utilization-and
[9] https://www.jdsupra.com/legalnews/all-american-ai-new-omb-memos-set-8174681/