Introduction

The European Union’s Artificial Intelligence Act (AI Act) [1] [4] [5] [7], which entered into force in August 2024 with obligations phasing in from 2025, establishes a comprehensive regulatory framework for AI systems with global implications [1] [5] [7]. This legislation mandates compliance from organizations worldwide, including those in the US [1], and emphasizes the development of human-centric, trustworthy [1] [2] [4] [5], and safe AI systems [4] [5] [7]. The Act’s extraterritorial reach and risk-based approach categorize AI systems into distinct risk levels, imposing specific obligations on each category. This regulatory framework aims to provide predictability for AI investments [6], particularly benefiting small and medium-sized enterprises (SMEs) within the EU [6].

Description

Organizations in the US [10], including those based in New York [10], must comply with the European Union’s Artificial Intelligence Act (AI Act), which entered into force in August 2024 and begins to apply in phases from 2025. This legislation establishes a comprehensive regulatory framework for AI systems with global impact [1] [5] [7], emphasizing that AI must be human-centric [5], trustworthy [1] [2] [4] [5], and safe [1] [4] [5] [9], particularly with respect to health [2] [5] [8], safety [1] [2] [5] [7], and fundamental human rights [1] [5]. The Act promotes trustworthy AI by enforcing safety [2], transparency [1] [2] [3] [4] [5] [7] [8] [10], and accountability standards while emphasizing human oversight and prohibiting technologies that threaten fundamental rights [2]. It has extraterritorial reach [2] [7], affecting any AI system used in the EU [2], regardless of the provider’s location [2].

The AI Act aims to provide long-term predictability for AI investments within the EU, creating a secure environment for AI development that particularly benefits small and medium-sized enterprises (SMEs) by offering a clear legal framework for business stability and planning [6]. It employs a risk-based approach [1] [5], categorizing AI systems into four risk levels: unacceptable [1] [7], high [1] [5] [7], limited [1] [5] [7] [9] [10], and minimal [4] [9] [10], each with corresponding obligations [1] [5]. Unacceptable-risk systems [1] [2] [10], such as real-time biometric surveillance [1] [7] [10], are prohibited [7]. High-risk systems [1] [2] [4] [10], particularly those used in critical sectors like healthcare, education [1] [2] [5] [10], employment [1] [2] [3], and financial services [1] [5] [10], must adhere to strict requirements regarding risk management [1] [4] [10], transparency [1] [2] [3] [4] [5] [7] [8] [10], and human oversight [1] [2] [3] [4] [7] [10], including conducting conformity assessments and providing technical documentation before deployment [2].
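As a rough, non-authoritative illustration of this tiered structure, the Python sketch below maps the four risk levels to the kinds of obligations summarized above. The tier names follow the Act’s categories as described here, but the obligation strings, the `RiskTier` enum, and the `obligations_for` helper are illustrative assumptions, not text from the regulation.

```python
# Illustrative sketch only: a hypothetical mapping of the AI Act's four risk
# tiers to the kinds of obligations described in this summary. Tier names and
# duties are paraphrased here, not quoted from the regulation's legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., real-time biometric surveillance)
    HIGH = "high"                  # strict requirements before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few or no obligations


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "conformity assessment before deployment",
        "technical documentation",
        "human oversight",
        "transparency",
    ],
    RiskTier.LIMITED: [
        "disclose to users that they are interacting with AI",
        "offer escalation to a human agent",
    ],
    RiskTier.MINIMAL: ["no specific obligations"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```

In practice, a system’s classification depends on its intended use and sector, so any such mapping would need review against the Act itself rather than a summary like this one.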

High-risk applications [3], including credit decisioning and fraud detection [3], face stringent compliance requirements [3], necessitating continuous risk assessments [3], comprehensive data governance [3], and clear explanations of decision-making processes to mitigate potential liabilities [3]. Limited-risk systems [2] [4], such as chatbots and virtual assistants [4], are subject to transparency obligations [1] [4] [10], including disclosing to users that they are interacting with AI or AI-generated content and offering escalation to a human agent. Minimal-risk systems [2] [4] [10], like spam filters [1], face few obligations [1] [2] [10]. Compliance is mandatory for both EU-based organizations and non-EU entities, including US companies, if their AI systems or outputs are used within the EU [1], even if those systems are hosted outside the EU. Key compliance dates include the ban on unacceptable-risk systems effective February 2, 2025 [10], and transparency rules for general-purpose AI systems taking effect on August 2, 2025 [10]. High-risk AI systems must comply with core obligations by August 2, 2026 [1] [10], and providers of general-purpose AI models marketed before August 2, 2025 [1] [10], must achieve compliance by August 2, 2027 [10].
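Because the obligations phase in on fixed dates, a simple lookup can help compliance teams track which milestones already apply. The sketch below encodes only the dates quoted above; the `AI_ACT_MILESTONES` dictionary and `milestones_in_force` function are hypothetical names for illustration, and the dates should be verified against official EU guidance before being relied on.

```python
# Illustrative sketch of the phased compliance dates cited in this summary.
# Verify every date against the regulation and current EU guidance.
from datetime import date

AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Ban on unacceptable-risk AI systems applies",
    date(2025, 8, 2): "Transparency rules for general-purpose AI take effect",
    date(2026, 8, 2): "Core obligations for high-risk AI systems apply",
    date(2027, 8, 2): "Deadline for general-purpose models marketed before Aug 2, 2025",
}


def milestones_in_force(as_of: date) -> list[str]:
    """List the milestones that have already taken effect on the given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= as_of]


if __name__ == "__main__":
    for item in milestones_in_force(date(2026, 1, 1)):
        print(item)
```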

Sectors such as healthcare [1] [5] [10], manufacturing [1] [5] [10], financial services [1] [4] [5] [10], and education are particularly impacted [1] [10]. US companies in these fields must audit their AI systems for fairness and bias [1] [10], implement risk controls [1] [10], and maintain documentation to access the EU market [1] [10]. For instance [8], a healthtech company utilizing AI diagnostic tools may comply with US laws but could face compliance issues under the EU AI Act if its tools are classified as high-risk [8]. Organizations should inventory their AI tools [10], cease any practices that fall into the unacceptable risk category [10], and prepare for compliance by developing internal governance frameworks [10], collecting necessary documentation [1], training their teams on the AI Act [1], and staying informed about EU guidance [1] [10]. This includes creating “Model Safety Data Sheets” that outline the purpose [7], training data [3] [4] [7], and risks associated with each AI tool [7].
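The paragraph above describes an internal inventory of AI tools and “Model Safety Data Sheets” covering each tool’s purpose, training data, and risks. The sketch below shows one hypothetical way such records might be structured; the `ModelSafetyDataSheet` and `AIToolInventory` dataclasses and their fields are assumptions for illustration, not a format prescribed by the Act.

```python
# Hypothetical sketch of an internal AI-tool inventory entry and a
# "Model Safety Data Sheet", with fields drawn from the description above
# (purpose, training data, risks). The structure is an assumption for
# illustration, not an EU-mandated schema.
from dataclasses import dataclass, field


@dataclass
class ModelSafetyDataSheet:
    system_name: str
    purpose: str
    training_data: str                       # provenance / summary of data used
    known_risks: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"          # e.g. "high", "limited", "minimal"


@dataclass
class AIToolInventory:
    entries: list[ModelSafetyDataSheet] = field(default_factory=list)

    def high_risk(self) -> list[ModelSafetyDataSheet]:
        """Entries flagged for the strictest compliance track."""
        return [e for e in self.entries if e.risk_tier == "high"]


if __name__ == "__main__":
    inventory = AIToolInventory([
        ModelSafetyDataSheet(
            system_name="resume-screener",
            purpose="rank job applicants",
            training_data="historical hiring records (2018-2023)",
            known_risks=["bias against protected groups"],
            risk_tier="high",
        ),
    ])
    print([e.system_name for e in inventory.high_risk()])
```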

In the absence of unified federal AI legislation in the US [8], companies are navigating a complex regulatory landscape that varies by state [8], similar to the early days of privacy laws [8]. States like California are at the forefront of AI regulation, enacting laws that reflect a risk-based approach similar to that of the EU AI Act. The decentralized nature of US AI regulation, characterized by a mix of federal guidelines [9], executive orders [9], and state laws [9], complicates compliance but also fosters innovation [9]. The National Artificial Intelligence Initiative Act (NAIIA) serves as a foundation for coordinated AI research and governance in the US [9]. The EU AI Act is expected to influence US regulations [10], with legal experts predicting a fragmented landscape of state-level regulations unless federal guidelines are established [7]. US businesses that align with the AI Act’s principles will be better positioned to comply with future domestic laws and maintain competitive access to the global market [10]. Early action is crucial to avoid steep penalties for noncompliance [10], which can reach up to €35 million or 7% of global annual turnover, whichever is higher. The implementation process is anticipated to resemble the experience of the GDPR [7], characterized by initial confusion and subsequent adaptation [7].

While the EU AI Act does not directly affect American consumers [7], it is likely to lead to improvements in AI practices among multinational tech firms [7]. As AI regulation continues to develop in an uncoordinated manner [8], the impact of these changes will depend on how quickly companies adapt to the demands for ethical [7], explainable [2] [7], and transparent AI [7], particularly in high-risk areas such as hiring [7], credit [3] [7], and public safety [7]. Legal and compliance teams must remain agile [8], employing a tailored [10], jurisdiction-specific strategy to mitigate potential enforcement risks. Entities using AI must assess their current applications and classify the associated risks according to the EU AI Act [6], factoring anticipated regulatory and compliance costs into their business strategies [6]. A thorough evaluation of current processes and best practices is necessary to understand the complexity of achieving compliance [6], with a focus on MLOps [6], cybersecurity [6], and ethical standards, in order to determine whether minor adjustments or significant restructuring is required [6].

Investors are encouraged to proactively engage with their portfolio companies to anticipate and evaluate future regulatory compliance costs [6]. For new investments [6], it is essential to select a Product and Technology Due Diligence provider capable of assessing the potential impacts of AI regulations [6]. Monitoring these developments is critical for comprehensive risk identification and for providing strategic recommendations [6]. As critical enforcement dates approach [2], the focus is on compliance [2], transparency [1] [2] [3] [4] [5] [7] [8] [10], and coordination [2], requiring organizations to stay informed and adapt swiftly to the evolving legal landscape [2].

Conclusion

The EU AI Act represents a significant shift in the regulatory landscape for AI systems, with far-reaching implications for global businesses, including those in the US [1]. By establishing a comprehensive framework that emphasizes safety, transparency [1] [2] [3] [4] [5] [7] [8] [10], and accountability [2] [5], the Act aims to foster a secure environment for AI development and investment. Organizations must proactively adapt to these regulations to maintain market access and avoid substantial penalties. As AI regulation continues to evolve [8], businesses that align with the principles of the AI Act will be better positioned to navigate future domestic and international compliance challenges, ultimately contributing to the development of ethical and trustworthy AI systems.

References

[1] https://www.lawfirmalliance.org/news-insights-events/the-eu-ai-act-what-u-s-companies-need-to-know
[2] https://atvais.com/understanding-the-eu-ai-act-2025/
[3] https://www.garp.org/risk-intelligence/artificial-intelligence/eu-ai-act-250606
[4] https://gettalkative.com/info/eu-ai-act-compliance-and-chatbots
[5] https://www.bsk.com/news-events-videos/the-eu-ai-act-what-u-s-companies-need-to-know
[6] https://www.techminers.com/knowledge/europes-ai-act-2024-what-companies-and-investors-should-know
[7] https://ypredict.ai/news/what-the-eu-ai-act-means-for-us-businesses-and-your-privacy/
[8] https://sandlineglobal.com/ai-compliance-and-strategy-in-a-fragmented-landscape/
[9] https://copyleaks.com/blog/ai-regulations
[10] https://www.jdsupra.com/legalnews/the-eu-ai-act-what-u-s-companies-need-6444348/