Introduction
The incoming Trump administration is anticipated to adopt a deregulatory approach to artificial intelligence (AI) [5], emphasizing innovation [1], economic competitiveness [2] [6], and national security [2]. This shift suggests a focus on industry growth and minimal regulation, potentially impacting both domestic and international AI governance.
Description
The incoming Trump administration is expected to adopt a hands-off regulatory approach towards artificial intelligence (AI) [5], prioritizing innovation [1] [2], economic competitiveness [2] [6], and national security [2]. This shift includes plans to repeal the previous administration’s executive order establishing reporting requirements for developers of advanced AI models [5], a deregulatory stance aimed at accelerating industry growth and strengthening US competitiveness globally. Recent appointments [4], such as Andrew Ferguson as chairman of the FTC and David Sacks as the inaugural “AI Czar,” suggest the administration will scrutinize Big Tech over alleged censorship rather than pursue broad regulatory oversight, consistent with traditional Republican deregulatory instincts [4]. Sacks is expected to play a central role in balancing technological advancement against civil rights and privacy protections, emphasizing industry collaboration, self-regulation, and consumer-centric AI applications. His approach is anticipated to favor minimal regulation [3], promoting a pro-business stance that encourages economic growth and technological progress [3].
Additionally, the selection of Paul Atkins as SEC chair signals a prioritization of rapid deployment and commercialization of technology [4]. Stakeholders should remain vigilant regarding state-level AI legislation [2], particularly in states such as California and New York [2], which may address safety and ethical concerns [2]. Under the Biden administration [2] [3] [4] [5] [6], AI policy focused on bolstering the US’s competitive edge in innovation [2], building on the American AI Initiative launched during the first Trump administration in 2019 [2]. Key Biden-era initiatives included the Blueprint for an AI Bill of Rights [2], an Executive Order promoting responsible AI development [2], and efforts to enhance accountability in AI systems through increased funding for research and development [2]. However, the anticipated reduction in regulatory oversight may coincide with cuts to AI research funding and limited federal legislation, leaving states to take the lead in addressing AI-related issues [6].
The political landscape in the US is marked by a significant divide over AI regulation [1]: the outgoing Biden administration advocated structured oversight, while Trump has pledged to dismantle those policies upon taking office [1]. This tension creates uncertainty that could hinder both domestic and international efforts to establish cohesive AI safety standards [1]. At the Federal Communications Commission (FCC) [5], the expected leadership of Commissioner Brendan Carr suggests a halt to regulatory initiatives concerning AI [5], particularly in the context of political advertising [5]. Carr has opposed previous efforts to regulate AI’s role in political speech [5], indicating that under his chairmanship [5], the FCC is unlikely to advance orders related to these proposals [5].
Meanwhile, numerous states introduced AI-related bills in 2024 [6], with Colorado enacting comprehensive legislation against algorithmic discrimination [6]. In 2025 [6], state lawmakers may pursue either broad AI regulations or targeted laws addressing specific applications such as automated decision-making [6], deepfakes [1] [6], facial recognition [6], and AI chatbots [6]. The implications of this political uncertainty are vast [1], affecting investment in AI technology and increasing risks associated with misinformation and discrimination [1]. The US’s role as a leader in global AI governance is jeopardized [1], as other nations may seek to fill the leadership void created by perceived inconsistencies in US policy [1].
While federal initiatives on topics like Section 230 reform and children’s online protection may progress [6], the overall pace of federal AI regulation and data privacy laws is expected to decelerate due to the administration’s deregulatory approach [6]. This approach [4] [6], described as “techno-pragmatic nationalism,” aims to bolster the US position in the global tech economy while potentially undermining tech industries in the European Union and emerging markets [4]. Bipartisan efforts are underway to establish guiding principles and recommendations for AI policy [2], but a clear pathway for comprehensive federal legislation remains elusive [2]. Experts express concern that a lack of stable regulatory structures could lead to accelerated AI development in harmful directions [1], such as the proliferation of deepfakes and unethical surveillance practices [1]. The anticipated rollback of AI safeguards established under the previous administration raises concerns about governance and safety frameworks [4], as advancements in AI could occur without adequate oversight [4].
The future of federal data privacy protection will depend on the dynamics between Congress [6], the courts [1] [6], and the administration [2] [3] [4] [6], as state lawmakers continue to introduce measures focused on preventing discriminatory outcomes, protecting intellectual property [2], and addressing issues related to generative AI tools [2]. The potential for a policy overhaul under the incoming Trump administration raises significant challenges for international discussions on AI safety [1], where the US has historically played a leadership role [1]. The dichotomy in US policy approaches could deter other countries from committing to collaborative AI safety agreements [1], leading to a cautious ‘wait-and-see’ attitude that stalls progress [1]. Despite the political turmoil [1], some experts believe that foundational technical work on AI safety may continue across administrations [1], highlighting commonalities in regulatory approaches [1]. However, the overarching uncertainty induced by potential policy reversals could undermine international alliances and cooperative efforts aimed at establishing unified regulatory standards for AI technologies [1].
Conclusion
The anticipated deregulatory approach of the incoming Trump administration towards AI could significantly impact both domestic and international AI governance. While it may foster innovation and economic growth, it also raises concerns about the adequacy of safety and privacy protections. The political divide and potential policy reversals create uncertainty, potentially hindering the US’s leadership role in global AI safety discussions and affecting international collaboration on establishing unified regulatory standards.
References
[1] https://opentools.ai/news/trumps-ai-policy-plans-cast-shadow-over-us-safety-talks
[2] https://news.bloomberglaw.com/us-law-week/ai-policies-under-trump-to-contrast-with-state-regulatory-trends
[3] https://www.biometricupdate.com/202501/trumps-ai-czar-likely-to-emphasize-de-regulation-market-driven-solutions
[4] https://news.yahoo.com/news/trump-tech-appointees-point-deregulated-114513752.html
[5] https://www.jdsupra.com/legalnews/ai-regulation-under-president-elect-8647474/
[6] https://theconversation.com/tech-law-in-2025-a-look-ahead-at-ai-privacy-and-social-media-regulation-under-the-new-trump-administration-245425