Introduction

The evolving regulatory landscape for artificial intelligence (AI) requires businesses to navigate complex legal frameworks to ensure compliance and mitigate risk. The EU AI Act [1] [2] [3] [4] [7] [8] [9] [10] [11], in force since August 1, 2024 [8], represents a significant step toward harmonizing AI regulation across member states, while in the United States regulation is emerging through varied state-level legislation. Organizations must adapt to these changes to manage compliance and litigation risks effectively.

Description

The classification of AI systems is influenced by various legal factors [6], particularly the laws applicable in the principal place of business or the state where operations are conducted [6]. As the regulatory landscape for artificial intelligence evolves rapidly at the international [1] [2] [4] [6] [8] [10], national, and state levels [4], businesses operating across multiple jurisdictions must navigate these changing requirements to ensure compliance and mitigate risks [4]. The EU AI Act [1] [2] [3] [4] [7] [8] [9] [10] [11], which came into force on August 1, 2024 [4] and mandates compliance for relevant entities from February 2, 2025 [6], is a significant development in this area. This comprehensive legal framework aims to harmonize the development, deployment [1] [3] [5] [7] [8] [10], and use of AI across member states, ensuring the free movement of AI-powered goods and services while addressing concerns that regulatory inconsistencies could fragment the market.

The Act categorizes AI systems by risk level [1] [3], employing a risk-based approach that weighs the intensity and scope of potential harms. It imposes stricter obligations on high-risk applications [1] [7], such as those used in healthcare, critical infrastructure [3] [4], employment [3] [5] [7] [8] [10], law enforcement [3] [7], and credit scoring [3], to ensure transparency [1], accountability [1] [7], and safety [1]. Prohibited applications include social scoring systems and biometric categorization systems that infer protected personal attributes [5], with the ban on such applications effective from February 2, 2025 [5]. High-risk AI systems will face significant regulatory requirements directed primarily at developers [5], while deployers [5], such as employers [5], will have lesser obligations [5], including ensuring human oversight and appropriate system usage [5]. Additional guidelines for high-risk systems are expected by February 2, 2026 [5], with the main requirements taking effect on August 2, 2026 [5]. Lower-risk systems are subject to less stringent oversight [3].

The Act also enhances the safety and transparency of permissible systems to address ethical and societal concerns [3]. Key requirements include transparency in decision-making processes [7], thorough documentation [7], and adherence to data protection regulations [7], particularly the General Data Protection Regulation (GDPR) [7] [9]. High-risk AI models must meet stringent standards for transparency, human oversight [3] [4] [5], and accuracy [4], and businesses must prepare for compliance deadlines [7], with these systems required to meet standards within two years of enforcement [7]. Non-compliance can result in significant penalties [7] [8], including fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for prohibited practices [8], and up to €15 million or 3% for other infringements [8].
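
To make the penalty structure concrete, the following Python sketch computes the maximum applicable fine as the greater of the fixed cap and the turnover-based cap, using the tiers cited above. It is an illustration only, not legal advice; the function and tier names are invented for this example.

```python
# Illustrative sketch of the EU AI Act penalty ceilings described above.
# Not legal advice: actual fines are set case by case by regulators.

PENALTY_TIERS = {
    # infringement type: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_infringement": (15_000_000, 0.03),
}

def max_fine(infringement: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine: the greater of the fixed cap
    and the turnover-based cap for the given infringement tier."""
    fixed_cap, turnover_share = PENALTY_TIERS[infringement]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a firm with EUR 2 billion worldwide annual turnover.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0 (7% exceeds the EUR 35M cap)
print(max_fine("other_infringement", 2_000_000_000))   # 60000000.0 (3% exceeds the EUR 15M cap)
```

For smaller firms the fixed cap dominates; for large ones the turnover percentage does, which is why the "whichever is higher" rule matters in practice.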

Starting August 2, 2025 [2] [10], new provisions will further impact governance, intellectual property compliance [2], and systemic risk monitoring for General Purpose AI (GPAI) models. Organizations must integrate AI training into their workflows and facilitate cross-team sessions to meet the requirements of the EU AI Act [2]. It is essential to conduct thorough governance reviews and adjust internal practices accordingly [2], as well as perform detailed intellectual property assessments to ensure compliance with copyright and data sourcing laws [2]. Staying informed about systemic risk thresholds and regulatory adjustments is crucial [2], particularly regarding the release of GPAI codes of practice by the European Commission and industry associations [2]. The EU AI Act is a “living regulation,” necessitating continuous adaptation to evolving legal instruments [2], with additional compliance guidelines and harmonized AI standards expected by the end of 2025 [2].
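
Given how many of these deadlines are staggered, some compliance teams track them programmatically. A minimal sketch, using only the milestone dates cited in this article (the structure and function names are hypothetical):

```python
from datetime import date

# Key EU AI Act milestones cited in this article; structure is illustrative.
MILESTONES = [
    (date(2024, 8, 1), "EU AI Act enters into force"),
    (date(2025, 2, 2), "Prohibited-practice ban and first obligations apply"),
    (date(2025, 8, 2), "GPAI governance and systemic-risk provisions apply"),
    (date(2026, 2, 2), "Additional high-risk system guidelines expected"),
    (date(2026, 8, 2), "Main high-risk system requirements take effect"),
]

def upcoming(today: date):
    """Return milestones that have not yet passed, soonest first."""
    return [(d, label) for d, label in sorted(MILESTONES) if d >= today]

for d, label in upcoming(date.today()):
    print(f"{d.isoformat()}: {label}")
```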

In the United States [4], state legislatures are actively developing AI-related legislation [4], with hundreds of bills under consideration. Notably, the Colorado AI Act [4], effective February 2026 [4] [5], requires businesses utilizing high-risk automated decision-making systems to integrate its requirements into their compliance programs [4], including risk management protocols and consumer notifications [4]. Additionally, the California AI Transparency Act [4], effective January 1, 2026 [4], mandates that providers of generative AI systems disclose their use in consumer interactions and offer AI detection tools [4], with penalties for non-compliance. California’s AB 3030 [4], effective January 1, 2025 [4], further requires healthcare providers using generative AI for patient communications to disclose this use and provide contact information for human providers [4].
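
Teams operating across several U.S. states sometimes encode such obligations in a simple lookup so that product reviews can flag the applicable statutes. A hypothetical sketch using the laws and effective dates cited above (field names are invented):

```python
from dataclasses import dataclass

@dataclass
class StateAIRule:
    statute: str
    effective: str          # effective date as cited in this article
    applies_to: str
    key_obligations: list[str]

# Statutes, dates, and duties as cited above; field names are illustrative.
US_STATE_RULES = [
    StateAIRule("Colorado AI Act", "February 2026",
                "high-risk automated decision-making systems",
                ["risk management protocols", "consumer notifications"]),
    StateAIRule("California AI Transparency Act", "January 1, 2026",
                "providers of generative AI systems",
                ["disclose AI use in consumer interactions",
                 "offer AI detection tools"]),
    StateAIRule("California AB 3030", "January 1, 2025",
                "healthcare providers using generative AI for patient communications",
                ["disclose generative AI use",
                 "provide human-provider contact information"]),
]

for rule in US_STATE_RULES:
    print(f"{rule.statute} (effective {rule.effective}): {rule.applies_to}")
```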

To mitigate potential legal risks [6], businesses should adopt best practices [6], such as conducting risk assessments of their AI systems [7], categorizing them by risk level [4] [7], and engaging with industry groups that provide guidance on AI and contribute to legislative advocacy [6]. Organizations should develop a compliance roadmap that documents training data [11], risk mitigation strategies [9] [11], and monitoring processes [11]. Engaging AI compliance experts before audits helps ensure legal standards are met [11], and investing in AI compliance consulting can prevent costly penalties and protect reputational integrity [11]. Companies can also use compliance checker tools to identify their category and corresponding obligations under the EU AI Act, reducing exposure to lawsuits as the legal landscape evolves [6]. Companies with ties to the EU market must prepare for the Act's extensive compliance requirements [4], while those operating in the U.S. [4] should track the evolving state regulations [4]. As AI use and regulation continue to advance [4], businesses must proactively adapt to emerging requirements to manage compliance and litigation risks effectively [4]. Companies that align with the Act's ethical guidelines may gain a competitive edge [7], enhancing their reputation and customer trust [7], while non-compliance threatens both financial standing and credibility [7], particularly in sectors where AI is critical [7].
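
As a rough illustration of the compliance-checker idea mentioned above, the following sketch maps a use case to the risk categories discussed in this article. The decision logic is a deliberate simplification for illustration, not the Act's full legal test:

```python
# Simplified sketch of an EU AI Act self-classification check.
# Mirrors only the categories discussed in this article; the real
# legal analysis is considerably more nuanced.

HIGH_RISK_DOMAINS = {
    "healthcare", "critical infrastructure", "employment",
    "law enforcement", "credit scoring",
}
PROHIBITED_USES = {
    "social scoring",
    "biometric categorization of protected attributes",
}

def classify(use_case: str, domain: str) -> str:
    if use_case in PROHIBITED_USES:
        return "prohibited"    # banned as of February 2, 2025
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"     # transparency, oversight, and accuracy duties
    return "lower-risk"        # lighter oversight obligations

print(classify("resume screening", "employment"))  # high-risk
```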

To ensure compliance with the EU AI Act [9], organizations should assess applicability [3], conduct AI reviews, prepare necessary documentation [3], perform conformity assessments for high-risk systems [3], submit an EU Declaration of Conformity [3], and engage in post-market monitoring and reassessment [3]. Implementing compliance automation platforms can facilitate this process [3], allowing organizations to concentrate on critical tasks while cross-referencing existing controls with the Act to prevent redundant efforts [3]. Maintaining compliance with various frameworks [3], such as SOC 2 [3], ISO/IEC 42001 [2] [3], and Cyber Essentials [3], necessitates careful planning [3], resource allocation [3], and continuous monitoring [3] [4]. Organizations should document compliance activities [3], conduct regular training [3], and prepare for audits to ensure adherence to standards [3]. By prioritizing compliance readiness [2], businesses can mitigate risks [2], enhance their market position [2], and promote responsible AI innovation in the evolving landscape [2].
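
One way to avoid the redundant effort noted above is to cross-map existing framework controls to AI Act duties and surface the gaps. A hypothetical sketch of that gap analysis (all control IDs and mappings are invented for illustration):

```python
# Hypothetical cross-mapping of existing framework controls to EU AI Act
# duties, illustrating the "prevent redundant efforts" idea above.
# All control IDs and mappings are invented for this example.

EXISTING_CONTROLS = {
    "SOC2-MON-01": "continuous monitoring of system anomalies",
    "ISO42001-IA-01": "AI system impact assessment",
}

AI_ACT_DUTIES = {
    "post-market monitoring": ["SOC2-MON-01"],
    "conformity assessment": ["ISO42001-IA-01"],
    "EU Declaration of Conformity": [],  # no existing control covers this
}

def gap_analysis() -> None:
    """Report which AI Act duties are already covered by existing controls."""
    for duty, controls in AI_ACT_DUTIES.items():
        if controls:
            print(f"{duty}: covered by {', '.join(controls)}")
        else:
            print(f"{duty}: GAP - new work needed")

gap_analysis()
```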

The EU AI Act provides a framework for businesses to navigate the risks associated with generative AI [10], facilitating innovation while ensuring compliance with emerging regulations [10]. Legal leaders are increasingly interested in how generative AI can enhance efficiency [10], yet they face challenges in reconciling practical applications with regulatory guidance [10]. The complexity of overlapping regulations across jurisdictions complicates compliance [10], particularly as AI introduces new issues across various corporate activities [10]. A balanced approach to compliance can enable responsible innovation and foster proactive risk management in AI development [10], addressing concerns such as security, explainability [10], potential litigation [10], bias [9] [10], ethics [2] [3] [4] [7] [8] [9] [10], and data privacy [10].

Organizations utilizing AI systems that process sensitive personal data face significant challenges [9], particularly in the context of detecting and correcting bias [9]. The dual compliance requirements of the EU AI Act and the GDPR necessitate careful navigation, especially as AI increasingly handles sensitive data [9]. A key regulatory challenge arises from the tension between ensuring algorithmic fairness and protecting sensitive personal data [9]. Article 10(5) of the AI Act permits the processing of special categories of personal data when strictly necessary for bias detection and correction in high-risk AI systems [9]. However, this provision may conflict with the GDPR’s Article 9 [9], which generally prohibits such processing without explicit consent or a specific legal basis [9].

While Article 10(5) may be interpreted as creating a new legal basis for processing sensitive data for bias-related purposes [9], the lack of explicit exceptions in the GDPR complicates this interpretation [9]. Organizations must navigate this regulatory gap to ensure compliance with both the AI Act and GDPR [9], particularly as the AI Act acknowledges the GDPR’s supremacy in cases of conflict [9]. A nuanced interpretation of “substantial public interest” under GDPR Article 9(2)(g) may allow for processing under this exception [9], with the AI Act serving as a legal basis [9], though this requires consensus among supervisory authorities for legal certainty [9].

To address these complexities [9], organizations are advised to adopt a comprehensive approach that considers both regulatory frameworks [9]. This includes conducting thorough risk assessments [9], identifying high-risk classifications [9], determining the necessity of processing special categories of data for bias detection [9], and documenting decision-making processes and risk mitigation strategies [9]. Implementing strong technical and organizational measures is crucial when processing sensitive personal data through AI systems [9]. Organizations should prioritize cybersecurity [9], access controls [9], data minimization [9], and the prompt deletion of special category data after bias correction [9], while also exploring anonymization techniques [9].
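
A minimal sketch of the "minimize, check, promptly delete" pattern for special-category data used in bias detection, assuming a simple per-group selection-rate disparity metric. All names are invented, and a real pipeline would add access controls, encryption, and audit logging:

```python
# Illustrative "minimize, check, delete" handling of special-category data
# for bias detection, per the measures described above. Names are invented.

def selection_rates(outcomes, groups):
    """Per-group rate of positive outcomes (a simple disparity signal)."""
    return {
        g: sum(o for o, grp in zip(outcomes, groups) if grp == g)
           / sum(1 for grp in groups if grp == g)
        for g in set(groups)
    }

def bias_check(outcomes, sensitive_groups):
    """Compute the disparity, then promptly delete the sensitive data."""
    try:
        rates = selection_rates(outcomes, sensitive_groups)
        return max(rates.values()) - min(rates.values())
    finally:
        # Empties the shared list in place, so the special-category
        # attribute values do not outlive the check.
        sensitive_groups.clear()

groups = ["a", "a", "a", "b", "b", "b"]        # special-category attribute values
print(bias_check([1, 0, 1, 1, 0, 0], groups))  # 0.333... disparity between groups
print(groups)                                  # [] -- data removed after the check
```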

A hybrid approach to legal bases for processing sensitive personal data may be beneficial [9], involving explicit consent where possible and leveraging the “substantial public interest” exception when consent is impractical [9]. Additionally, organizations should document that bias correction aligns with GDPR’s fair processing principle [9]. Maintaining detailed records of processing activities [9], necessity assessments [9], and legal basis analyses is essential [9]. Clear communication about data usage and the establishment of governance structures overseeing AI systems that process sensitive data will further support compliance efforts [9]. The current lack of regulatory clarity regarding the use of sensitive personal data for bias mitigation in AI poses challenges for organizations [9]. In anticipation of guidance from lawmakers and supervisory authorities [9], a proactive and well-documented approach to risk assessment [9], data minimization [9], and transparency is recommended [9].
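
For the record-keeping duty above, a record-of-processing entry might capture the purpose, legal basis, and necessity analysis together. A minimal sketch with hypothetical field names, not a prescribed GDPR or AI Act schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProcessingRecord:
    """Illustrative record-of-processing entry for sensitive-data bias work.
    Field names are hypothetical, not a prescribed GDPR/AI Act schema."""
    purpose: str
    legal_basis: str                 # e.g., explicit consent or Art. 9(2)(g)
    necessity_assessment: str
    risk_mitigations: list[str] = field(default_factory=list)
    recorded_on: date = field(default_factory=date.today)

record = ProcessingRecord(
    purpose="bias detection and correction in a high-risk AI system",
    legal_basis="GDPR Art. 9(2)(g) substantial public interest, read with AI Act Art. 10(5)",
    necessity_assessment="no less intrusive method identified; data deleted after the check",
    risk_mitigations=["access controls", "data minimization", "prompt deletion"],
)
print(record)
```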

Conclusion

The EU AI Act and various state-level regulations in the United States present both challenges and opportunities for businesses utilizing AI technologies. Compliance with these evolving legal frameworks is crucial to mitigate risks, avoid penalties [4], and maintain a competitive edge. By adopting proactive compliance strategies, organizations can navigate the complexities of AI regulation, promote responsible innovation [2] [10], and enhance their market position while addressing ethical and societal concerns.

References

[1] https://chambers.com/legal-trends/eu-ai-acts-goals
[2] https://digital.nemko.com/insights/navigating-the-eu-ai-act-in-2025
[3] https://www.vanta.com/resources/eu-ai-act-guide
[4] https://www.smithlaw.com/newsroom/publications/the-future-of-ai-compliance-preparing-for-new-global-and-state-laws
[5] https://www.wtwco.com/en-in/insights/2025/03/eu-comprehensive-ai-act-includes-obligations-for-employers
[6] https://www.jdsupra.com/legalnews/minimizing-product-liability-risks-1059181/
[7] https://priceofbusiness.com/the-global-implications-of-the-eu-ai-act-a-guide-for-business-leaders/
[8] https://www.lexology.com/library/detail.aspx?g=a1aa3da8-4ba8-4967-8f28-1981bae8bc2b
[9] https://news.bloomberglaw.com/us-law-week/gap-in-the-eus-rules-for-ai-requires-a-well-documented-approach
[10] https://news.bloomberglaw.com/us-law-week/eu-ai-act-provides-gcs-innovation-guideposts-not-barriers
[11] https://ai-advy.com/news/eu-ai-act-compliance-a-step-by-step-guide-for-businesses/