Introduction

The Artificial Intelligence Act (AI Act) [1] [3], which entered into force on August 1, 2024 [1] [3] [6], establishes a comprehensive legal framework for regulating AI systems [1] [3], focusing on ensuring their trustworthiness [1], safety [1] [3], and respect for fundamental rights [1]. The framework applies both to providers established in the EU and to providers outside the EU whose AI systems affect the EU market. The Act takes a risk-based approach [5], categorizing AI systems by risk level and tailoring its rules to the intensity and scope of the associated risks. It prohibits certain unacceptable practices and imposes specific requirements on high-risk AI systems and general-purpose AI models [5].

Description

High-risk AI systems [1] [2] [3] [4] [5] [6] [7], particularly those used in critical areas such as healthcare, law enforcement [6], and biometric verification [7], are subject to stringent requirements due to their significant impact on individuals’ lives [7]. These requirements include comprehensive risk assessments, robust documentation [2] [3] [4] [5] [6], transparency in AI decision-making [5] [7], and strict quality management systems to ensure reliability and non-discriminatory outcomes. Developers must conduct extensive testing and validation to demonstrate the accuracy, robustness [1] [3], and cybersecurity of their AI solutions throughout their lifecycle, ensuring resilience against errors and unauthorized exploitation [1]. Providers must carry out these risk assessments before placing a system on the market [1] [3], documenting potential risks to health [3], safety [1] [3], and fundamental rights [1] [3], as well as cybersecurity risks and mitigation strategies [3]. This documentation must be regularly updated and made accessible to competent authorities upon request [3], and it must include detailed software documentation covering design choices, training data [2] [4] [6], algorithmic logic [4], performance metrics [2] [4], potential biases [1] [3] [4] [7], and risk mitigation strategies [1] [4]. Additionally, high-risk AI systems must ensure human oversight, provide clear explanations of their operations [7], and be registered in an EU database [6].
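
To illustrate how such documentation might be organized in practice, the following minimal sketch models a machine-readable record loosely based on the items listed above (design choices, training data, performance metrics, known biases, mitigations, oversight). The field names and example values are illustrative assumptions, not an official Annex IV schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class HighRiskAIDocumentation:
    """Illustrative (non-official) record of AI Act-style technical documentation."""
    system_name: str
    intended_purpose: str
    design_choices: list[str]               # key architectural and design decisions
    training_data_description: str          # provenance, scope, and preparation of training data
    algorithmic_logic: str                  # high-level description of how outputs are produced
    performance_metrics: dict[str, float]   # e.g. accuracy and robustness measures
    known_biases: list[str]                 # identified biases and affected groups
    risk_mitigations: list[str]             # strategies for mitigating identified risks
    cybersecurity_measures: list[str]       # protections against errors and exploitation
    human_oversight: str                    # how human oversight is ensured
    last_updated: str                       # documentation must be kept up to date

    def to_json(self) -> str:
        """Serialize the record so it can be shared with a competent authority on request."""
        return json.dumps(asdict(self), indent=2)

# Fictitious example record for a hypothetical clinical triage support system.
doc = HighRiskAIDocumentation(
    system_name="TriageAssist (fictitious example)",
    intended_purpose="Support clinicians in prioritizing emergency cases",
    design_choices=["gradient-boosted decision trees", "human-in-the-loop review"],
    training_data_description="De-identified hospital triage records, 2018-2023",
    algorithmic_logic="Ranks cases by a predicted urgency score",
    performance_metrics={"accuracy": 0.91, "false_negative_rate": 0.04},
    known_biases=["under-representation of paediatric cases"],
    risk_mitigations=["mandatory clinician sign-off", "quarterly bias audits"],
    cybersecurity_measures=["signed model artefacts", "access logging"],
    human_oversight="A clinician can override every recommendation",
    last_updated="2025-01-15",
)
print(doc.to_json())
```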

For general-purpose AI models with systemic risks [5], providers must notify the EU Commission if their models meet high-impact capability thresholds and prepare detailed technical documentation [5]. Companies developing high-impact General Purpose AI (GPAI) models are mandated to conduct thorough model evaluations [2], implement cybersecurity measures [2], and report on energy consumption and training data usage [2]. High-impact capabilities are defined by criteria such as model parameters [5], data set quality [5], computational resources [5], and evaluation benchmarks [5]. Transparency is mandated to foster public trust and prevent misuse [5], requiring providers to inform individuals about AI interactions [5], maintain thorough documentation [5] [6], and adhere to logging practices [5]. High-risk AI systems face stricter transparency obligations [5], including the marking of synthetic content to combat misinformation and ensure clarity in AI decision-making [5].
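
The Act itself ties the presumption of high-impact capabilities to a cumulative training-compute threshold of 10^25 floating-point operations, alongside the possibility of designation by the Commission on other grounds. The short sketch below shows how a provider-side pre-screening check along those lines could look; it is an illustration under those assumptions, not a compliance tool.

```python
# Minimal sketch (not legal advice): a provider-side check of whether a
# general-purpose AI model may fall under the systemic-risk presumption.
# The 1e25 FLOP figure reflects the training-compute threshold named in the
# AI Act; other designation criteria are reduced to a single boolean here.

SYSTEMIC_RISK_COMPUTE_FLOPS = 1e25  # presumption threshold for high-impact capabilities

def may_have_systemic_risk(training_compute_flops: float,
                           designated_by_commission: bool = False) -> bool:
    """Return True if the model should trigger notification to the EU Commission."""
    return designated_by_commission or training_compute_flops >= SYSTEMIC_RISK_COMPUTE_FLOPS

# Example: a model trained with roughly 3e25 FLOPs meets the presumption threshold.
print(may_have_systemic_risk(3e25))   # True
print(may_have_systemic_risk(8e24))   # False, unless designated by the Commission
```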

The Cyber Resilience Act (CRA) complements the AI Act by imposing cybersecurity requirements on connected products and software [3], including those with AI models [3]. Compliance with the CRA’s security-by-design requirements will also satisfy the cybersecurity obligations of the AI Act for high-risk devices [3]. Organizations are encouraged to adopt a risk-based [3], security-by-design approach [1] [3], integrating security measures into the development process and ensuring that default settings prioritize security [1] [3].
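
As a small illustration of what "secure by default" can mean at the configuration level, the sketch below defines a hypothetical service configuration whose default settings already favour security. The setting names are assumptions chosen for illustration, not requirements taken from the CRA text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceConfig:
    """Hypothetical configuration whose defaults prioritize security."""
    require_authentication: bool = True       # no anonymous access by default
    enforce_tls: bool = True                  # encrypted transport by default
    auto_security_updates: bool = True        # patches applied without user opt-in
    minimize_telemetry: bool = True           # collect only what is strictly needed
    default_admin_password: Optional[str] = None  # no hard-coded credential shipped

config = ServiceConfig()  # the out-of-the-box configuration is already the secure one
print(config)
```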

Initial compliance measures under the AI Act include prohibitions on harmful AI applications and obligations for AI literacy. By August 2026 [6] [7], compliance with the AI Act’s standards on transparency [7], security [1] [2] [3] [6] [7], and risk management will be mandatory for all high-risk AI systems [7]. Regular risk assessments and adherence to best practices are essential for maintaining cybersecurity [1] [3], and organizations are advised to conduct compliance audits and seek external consultancy for specific compliance issues [6]. Investment in cybersecurity and AI governance is recommended [6], alongside employee training to ensure understanding of the new regulatory landscape [6].

Non-compliance with the AI Act can result in significant penalties, with fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for prohibited practices [5], and up to €15 million or 3% for other infringements [5], with more proportionate caps for small and medium-sized enterprises in the penalty structure [5]. The AI Act’s cybersecurity requirements will influence global standards for AI systems [3], emphasizing the need for security in systems that process data or interact with users [3]. Cybersecurity is critical not only for compliance but also for maintaining trust [3], reputation [1] [3] [6] [7], and competitiveness [1] [3], as cyberattacks can severely impact data confidentiality [3], integrity [1] [3] [5], and availability [1] [3], undermining user trust and market position [3]. Adopting a compliance-first mindset is crucial to safeguard customer trust and corporate reputation, and collaboration with authorities like the European Union Agency for Cybersecurity (ENISA) is recommended for guidance on cybersecurity policy and AI-related issues [3].
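
To make these ceilings concrete, the following worked sketch computes the maximum possible fine, assuming (as reflected in the Act's penalty provisions) that the higher of the fixed amount and the turnover percentage applies to large undertakings, while the lower of the two applies to SMEs. It is an arithmetic illustration, not a legal calculation.

```python
# Worked sketch of the AI Act fine ceilings described above (not legal advice).
# Assumption: the cap is the higher of the fixed amount and the turnover
# percentage for large undertakings, and the lower of the two for SMEs.

def fine_ceiling(turnover_eur: float, prohibited_practice: bool, is_sme: bool = False) -> float:
    fixed, pct = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
    turnover_based = pct * turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with €2 billion worldwide annual turnover committing a prohibited practice:
print(f"€{fine_ceiling(2_000_000_000, prohibited_practice=True):,.0f}")              # €140,000,000 (7% > €35M)
# The same infringement by an SME with €50 million turnover:
print(f"€{fine_ceiling(50_000_000, prohibited_practice=True, is_sme=True):,.0f}")    # €3,500,000
```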

The introduction of the AI Act necessitates significant changes in AI documentation and compliance practices [4]. Organizations developing high-risk AI systems must produce technical documentation that aligns with the requirements outlined in Annex IV of the Act [4]. This shift in documentation practices promotes transparency, accountability [4], and an ethical, risk-focused approach to development, with the documentation itself serving as a risk-reduction mechanism [4]. Methodologies for drafting AI Act-compliant technical documentation have been proposed [4], and exploratory automated tools such as the DoXpert system [4] have shown potential for identifying missing information and aligning with expert compliance opinions [4]. This underscores the value of integrating advanced AI tools in the compliance review process while maintaining human oversight [4], potentially reducing the need for frequent legal consultations and associated costs [4].
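
As a rough illustration of how such automated gap-finding might work, the sketch below checks a draft documentation set against a list of required sections. The section names only approximate Annex IV topics and are assumptions; this is a hypothetical example, not the DoXpert system or any official checklist.

```python
# Hypothetical sketch of an automated completeness check over technical
# documentation, illustrating the kind of gap-finding described above.

REQUIRED_SECTIONS = {
    "general description",
    "design choices",
    "training data",
    "performance metrics",
    "risk management",
    "human oversight",
    "cybersecurity",
}

def missing_sections(documentation: dict[str, str]) -> set[str]:
    """Return required sections that are absent or empty, for human review."""
    present = {name.lower() for name, text in documentation.items() if text.strip()}
    return REQUIRED_SECTIONS - present

draft = {
    "General description": "Credit-scoring support system ...",
    "Design choices": "Gradient-boosted trees with monotonic constraints ...",
    "Training data": "",                      # empty section: flagged as missing
    "Performance metrics": "AUC 0.87 on the hold-out set ...",
}
print(missing_sections(draft))
# e.g. {'training data', 'risk management', 'human oversight', 'cybersecurity'}
```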

Conclusion

The AI Act represents a significant shift in the regulatory landscape for AI systems, emphasizing the importance of trust, safety [1] [3], and fundamental rights [1] [3]. Its implementation will require organizations to adopt rigorous compliance measures, particularly for high-risk AI systems [1] [4] [7], and to integrate cybersecurity and transparency into their operations. The Act’s influence is expected to extend beyond the EU, setting a precedent for global AI standards and practices. Organizations must prioritize compliance to avoid substantial penalties and to maintain their competitive edge in the market.

References

[1] https://www.aoshearman.com/insights/ao-shearman-on-tech/zooming-in-on-ai-cybersecurity-requirements-for-ai-systems
[2] https://intellias.com/eu-ai-act-risk-levels/
[3] https://www.jdsupra.com/legalnews/zooming-in-on-ai-18-cybersecurity-2928431/
[4] https://link.springer.com/article/10.1007/s10664-025-10645-x
[5] https://www.lexology.com/library/detail.aspx?g=a1aa3da8-4ba8-4967-8f28-1981bae8bc2b
[6] https://nordpass.com/blog/it-regulatory-landscape-in-2025/
[7] https://www.signicat.com/blog/ai-act-what-you-need-to-know