Introduction
Recent developments across various US federal agencies highlight a dynamic shift in digital governance, focusing on artificial intelligence (AI) [3], data privacy [2], and agency operations [2]. These changes reflect a concerted effort to address emerging technologies, national security concerns [2], and evolving policy priorities [2].
Description
Recent developments across multiple US federal agencies indicate a rapidly evolving approach to digital governance [2], particularly concerning AI [2] [3], data privacy [2], and agency operations [2].
The Federal Trade Commission (FTC) has sharpened its focus on cross-border data transfers [2], particularly the sale of sensitive personal data to foreign entities deemed national security risks [2]. The FTC will rely on the Protecting Americans’ Data from Foreign Adversaries Act (PADFA) and an Executive Order enforced by the Department of Justice (DOJ) to strengthen its enforcement capabilities [2]. On children’s privacy [2], the FTC has finalized revisions to the Children’s Online Privacy Protection Act (COPPA) Rule [2], which will require mandatory opt-in parental consent for third-party disclosures and impose limits on data retention [2]. The updated rule takes effect on June 23, 2025 [2], and may pave the way for future age-verification frameworks [2].
The FTC is also expected to focus on AI-related enforcement [2], particularly concerning training data [2], consent practices [2] [3], and market competition [2]. The agency will pay close attention to how data access and transparency influence competitive dynamics in the AI sector [2]. Significant personnel changes have occurred at the FTC [2], including the dismissal of two Democratic commissioners [2], which has sparked a legal dispute over presidential authority to remove leaders of independent agencies [2]. The case is currently under review [2], and the agency’s independence is being challenged [2]. The FTC has also removed over 300 blog posts published during the Biden administration [2], raising concerns about the clarity of its positions on data privacy and emerging technologies [2]. The confirmation of a new Republican commissioner has restored a quorum to the commission [2].
An enforcement action has been initiated against Workado [2], a vendor of AI-powered content detection tools [2], requiring the company to substantiate claims about the accuracy of its AI products [2]. The Consumer Financial Protection Bureau (CFPB) is facing scrutiny over a proposed plan to significantly reduce its workforce [2], which has led to legal challenges regarding executive authority and statutory protections [2].
PADFA [2], signed into law in April 2024 [2], prohibits data brokers from transferring sensitive personal data of US individuals to foreign adversaries [2]. The DOJ has implemented new rules restricting the transfer of sensitive data [2], with compliance deadlines set for organizations [2]. Staff reductions at the National Institute of Standards and Technology (NIST) have raised concerns about its future role in establishing AI safety and cybersecurity standards [2].
The Federal Communications Commission (FCC) has established a Council on National Security to address risks in the telecommunications sector [2], focusing on supply chain vulnerabilities [2]. Federal agencies are directed to appoint Chief AI Officers (CAIOs) and adopt a risk-based approach to high-impact AI systems [2], ensuring alignment with privacy and civil rights protections [2]. A memorandum issued on April 3, 2025, outlines new requirements for implementing high-impact AI [1], defined as AI that significantly influences decisions or actions with legal or material effects [1], particularly in areas such as human health and safety [1]. Agencies must conduct pre-deployment testing to simulate real-world outcomes [1], complete documented AI impact assessments [1], and ensure ongoing monitoring of AI performance [1]. This includes periodic human reviews and adequate training for operators of AI systems, particularly in healthcare applications [1], which must incorporate fail-safe mechanisms to minimize risks of significant harm [1].
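The memorandum's control requirements for high-impact AI (pre-deployment testing, documented impact assessments, ongoing monitoring, periodic human review, operator training, and fail-safe mechanisms) can be pictured as a simple compliance checklist. The sketch below is purely illustrative; the class and field names are assumptions, not official terminology from the memorandum.

```python
from dataclasses import dataclass, fields

# Hypothetical checklist mirroring the controls the April 2025 memorandum
# requires for "high-impact" AI; field names are illustrative, not official.
@dataclass
class HighImpactAIControls:
    pre_deployment_testing: bool = False     # simulate real-world outcomes
    impact_assessment_documented: bool = False
    ongoing_monitoring: bool = False
    periodic_human_review: bool = False
    operator_training: bool = False
    fail_safe_mechanisms: bool = False       # e.g., healthcare applications

def missing_controls(c: HighImpactAIControls) -> list[str]:
    """Return the names of required controls not yet in place."""
    return [f.name for f in fields(c) if not getattr(c, f.name)]

# Example: a system that has been tested and assessed but lacks monitoring,
# review, training, and fail-safes.
system = HighImpactAIControls(pre_deployment_testing=True,
                              impact_assessment_documented=True)
print(missing_controls(system))
```

A real compliance program would track evidence and dates rather than booleans, but the point is the same: any unmet control blocks deployment of a high-impact system.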
The White House’s Office of Management and Budget (OMB) has issued memoranda aimed at enhancing the governance and adoption of AI within federal agencies [3], promoting a pro-innovation and pro-competition approach to accelerate AI integration by reducing bureaucratic barriers [3]. Key initiatives include promoting competition in federal AI procurement [3], building an AI-ready workforce [3], and maintaining annual inventories of AI use cases [3]. Agencies must publish strategies to eliminate barriers to AI use within 180 days and convene governance boards within 90 days [3]. High-impact AI applications [1] [3], such as biometric identification in public spaces [3], require documentation to counter the presumption of high-impact use [3]. Agencies must implement tailored risk management practices for high-impact AI use cases [3], including pre-deployment testing and AI impact assessments [3], with a deadline for discontinuing non-compliant systems set for April 3, 2026 [3].
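Taking the deadline windows above at face value, and assuming the compliance clock runs from the memoranda's issuance date of April 3, 2025 (an assumption; the memos themselves control how the periods are counted), the dates work out as follows:

```python
from datetime import date, timedelta

# Assumption: day counts run from the memoranda's issuance date.
MEMO_DATE = date(2025, 4, 3)

deadlines = {
    "convene AI governance boards (90 days)": MEMO_DATE + timedelta(days=90),
    "publish barrier-removal strategies (180 days)": MEMO_DATE + timedelta(days=180),
    "discontinue non-compliant high-impact AI": date(2026, 4, 3),  # fixed date
}

for task, due in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(f"{due.isoformat()}  {task}")
```

Under that assumption, governance boards would be due by July 2, 2025, and the barrier-removal strategies by September 30, 2025, well ahead of the April 3, 2026 cutoff for non-compliant systems.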
The guidance emphasizes responsible AI governance [3], aligning with state AI legislation applicable to private companies [3]. While recognizing the need for intellectual property protections [3], the memos stress interoperability [3], avoidance of vendor lock-in [3], and data sharing [3], prompting contractors to adjust licensing agreements [3]. A new category of “high-impact” AI necessitates additional risk management practices [3], including ongoing monitoring and training [3]. Contracts for AI systems must include provisions for regular performance evaluations and risk assessments [3].
Federal agencies are directed to update their IT policies and infrastructure within 270 days [3], which may impact existing contracts [3]. OMB memorandum M-25-21 aims to guide responsible AI adoption while safeguarding privacy [3], civil rights [1] [2] [3], and civil liberties [3]. Agencies must publicly release compliance plans within 180 days and develop a Generative AI policy by early 2026 [3], with annual AI use case inventories required until 2036 [3]. OMB plans to publish procurement playbooks [3], and the General Services Administration (GSA) will provide guides for the federal acquisition workforce [3]. An online repository of tools and resources for responsible AI procurement will be developed to promote knowledge-sharing among agencies [3].
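The annual AI use case inventories could take many shapes; the sketch below shows one hypothetical entry format (the schema and field names are assumptions for illustration, not a prescribed federal standard):

```python
import json
from dataclasses import dataclass, asdict

# Illustrative shape for one annual AI use case inventory entry; the
# fields are assumptions, not an official schema.
@dataclass
class AIUseCaseEntry:
    agency: str
    use_case: str
    high_impact: bool            # triggers the extra risk-management practices
    impact_assessment_done: bool
    last_reviewed: str           # ISO date of most recent periodic review

entry = AIUseCaseEntry(
    agency="Example Agency",
    use_case="Automated benefits eligibility screening",
    high_impact=True,
    impact_assessment_done=True,
    last_reviewed="2025-06-01",
)
print(json.dumps(asdict(entry), indent=2))
```

Serializing entries to a structured format like JSON is one way an agency could publish the inventory while keeping it machine-readable for the planned cross-agency repository.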
Conclusion
These developments signify a substantial realignment across federal agencies in response to technological advancements [2], national security issues [2] [3], and shifting policy priorities [2]. AI governance and data protection remain pivotal areas of focus [2]. As the government establishes standards for AI accountability [3], private companies are encouraged to align their governance models with these standards to mitigate future regulatory risks [3]. Investing in infrastructure that accommodates regulatory changes around AI is essential for businesses to scale AI confidently and responsibly [3].
References
[1] https://www.winston.com/en/insights-news/white-house-memorandum-elaborates-on-prior-executive-order-with-requirements-for-high-impact-ai-used-by-federal-agencies
[2] https://www.jdsupra.com/legalnews/trump-2-0-tech-policy-rundown-100-days-6559442/
[3] https://nquiringminds.com/ai-legal-news/omb-issues-new-ai-governance-memoranda-for-federal-agencies/