Introduction
President Trump's rescission of the Biden Administration's executive order on artificial intelligence on January 20, 2025, marks a pivotal change in the landscape of federal AI regulation and oversight. It signals a transition from a structured regulatory framework aimed at ensuring safety and transparency to a development-focused approach that emphasizes innovation and competitive advantage.
Description
The original executive order, signed in October 2023, established safety guidelines for generative AI and included critical standards for healthcare AI tools [2] [6] [8]. It mandated that developers of large AI models, such as OpenAI's GPT, disclose the results of safety tests to the federal government, particularly for systems with implications for national security, public health, or the economy [1] [3] [5]. It also required developers of AI systems posing significant risks to conduct safety tests and report their findings before public release, invoking authority under the Defense Production Act [1].
The order outlined eight key priorities, including ensuring AI safety through standardized evaluations, investing in education and research, mitigating bias, and protecting privacy [4]. It directed the National Institute of Standards and Technology to create safety testing standards and instructed federal agencies to evaluate risks associated with AI technologies, particularly cybersecurity and related threats [3]. It further included provisions to protect workers and consumers, commissioning a report on AI's impact on the labor market and directing agencies to develop strategies against AI-enabled fraud and discriminatory algorithms [3].
While the Biden Administration's executive order sought to enhance safety measures for AI developers and provide guidance for businesses adopting AI technology, it imposed no penalties for non-compliance [2]. The administration's other significant contribution to AI governance was the Blueprint for an AI Bill of Rights, a 73-page document released in October 2022 that has influenced numerous state and local laws regarding AI [6]. In the absence of comprehensive federal AI legislation, states across the political spectrum are actively working to establish safeguards and enhance transparency within the AI industry [6].
In contrast, the Trump Administration's approach prioritizes the development of new AI tools, reflecting Republican advocacy for AI development that promotes free speech and human flourishing [1]. This shift toward a less regulated environment is intended to foster innovation and preserve the US's competitive advantage [5]. However, the cancellation of the executive order has raised concerns about weakened protections for privacy, civil liberties, and safety around advanced AI systems [8]. Experts warn that the absence of uniform standards could exacerbate algorithmic bias, widening existing healthcare disparities and harming long-term health outcomes [4]. The resulting regulatory uncertainty may also create conflicts with the European Union's AI Act, which imposes transparency requirements and restricts certain AI applications [3]. Public trust in AI technologies remains a significant concern, as surveys indicate high levels of mistrust among Americans.
The revocation of the executive order has created regulatory uncertainty for companies in AI-driven sectors. Without a cohesive federal framework, businesses may face inconsistent regulations from states and international bodies, heightened risks related to AI ethics and data privacy, and competitive disparities arising from varying standards in AI development and deployment [7]. Although the rescinded executive order has been removed from the White House website, some Biden-era regulatory measures aimed at advancing AI development in the US may remain intact [3]. These include new restrictions imposed by the US Commerce Department on AI chip and technology exports, as well as an executive order intended to support the energy needs of advanced AI data centers on federal land.
In response to this evolving landscape, companies are advised to take proactive steps [7]. Strengthening internal governance by developing or enhancing ethical guidelines for AI usage is crucial. Organizations should also invest in compliance by tracking state, international, and industry-specific regulations, aligning their practices with emerging standards such as Colorado's Artificial Intelligence Act and the EU's AI Act [1] [3] [5] [7]. Monitoring federal policy changes and legislative developments is essential, as these may signal new directions in AI governance. Engaging with industry groups and standards organizations can help shape voluntary guidelines and best practices. Establishing robust risk management frameworks is likewise vital to address bias, cybersecurity threats, and liability concerns [5] [7]. Analysts anticipate that the Trump Administration will adopt a more lenient regulatory approach to AI [2], reflecting a broader shift in federal AI governance that places greater responsibility on the private sector to ensure ethical and safe AI usage [7]. Companies must navigate this uncertain regulatory environment while continuing to innovate responsibly, remaining vigilant and adaptable to maintain their competitive edge and public trust [7].
Conclusion
The rescission of the executive order marks a significant shift in AI governance, moving from a structured regulatory framework to a more innovation-driven approach. This change presents both opportunities and challenges for AI development in the US. While it may foster innovation and preserve competitive advantage [5], it also raises concerns about privacy, civil liberties, and safety [7] [8]. Companies must adapt to this evolving regulatory landscape by strengthening internal governance, staying informed about applicable regulations [7], and establishing robust risk management frameworks to ensure ethical and safe AI usage.
References
[1] https://www.usatoday.com/story/money/2025/01/20/trump-revokes-biden-executive-order-ai-risks/77842476007/
[2] https://www.ciodive.com/news/Trump-AI-executive-order-repeal-Biden-era-regulatory-overhaul/737883/
[3] https://www.theverge.com/2025/1/21/24348504/donald-trump-ai-safety-executive-order-rescind
[4] https://www.techtarget.com/HealthtechAnalytics/news/366618313/Trump-rescinds-Bidens-trustworthy-AI-development-order
[5] https://www.eweek.com/news/trump-revokes-biden-ai-eo/
[6] https://www.transparencycoalition.ai/news/on-day-one-trump-rescinds-biden-executive-order-on-ai
[7] https://natlawreview.com/article/2023-ai-executive-order-revoked
[8] https://www.mercurynews.com/2025/01/22/trump-rescinds-biden-ai-executive-order/