Introduction

AI literacy is increasingly recognized as a critical competency for organizations and individuals involved in developing and deploying AI systems. The European Union’s Artificial Intelligence Act (AI Act) [3] underscores its importance by mandating AI literacy for all relevant stakeholders, an obligation applicable from February 2, 2025 [1]. The legislation aims to ensure informed decision-making about AI systems, balancing recognition of their risks and opportunities [1] [2] with their potential harms [1].

Description

AI literacy is defined as the skills, knowledge, and understanding that organizations and individuals need to make informed decisions about deploying AI systems, recognizing both the risks and opportunities as well as potential harms [1] [2]. The AI Act [3] mandates AI literacy, from February 2, 2025 [1], for all organizations involved in developing and deploying AI systems [2]. This covers both “providers” (those who develop AI systems) and “deployers” (those who use them), whether organizations or individuals [1] [2]. Importers and distributors of AI systems have separate obligations under the Act [1].

The AI Act adopts a risk-based approach, sorting AI systems into risk tiers [1] [2]; high-risk AI systems account for approximately 98% of the Act’s legal provisions [3]. These systems are subject to compliance, certification, and CE-marking requirements similar to those governing medical devices [3]. Transparency obligations require that users be informed when they are interacting with an AI system, reflecting the Act’s emphasis on human dignity in interactions with technology [3]. Models deemed to pose systemic risk face additional requirements, including risk assessments, incident reporting, and cybersecurity measures, triggered by computational-power thresholds or designation by the AI Office [3].

While the AI literacy principle applies universally, the compliance burden falls chiefly on providers and deployers, with importers and distributors facing different obligations [1]. Roles can also change: an importer or distributor that becomes a provider must then adhere to the AI literacy requirements [1]. The Act’s scope has likewise evolved: the list of prohibited use cases has expanded from four to eight, now including areas such as emotion recognition, and the definition of high-risk applications has broadened, increasing compliance demands [3].

Organizations must consider their size, resources, and employees’ roles when implementing AI literacy measures [1] [2] [3]. Larger organizations typically have more compliance resources than smaller entities, which regulators may take into account [1]. Employees in technology-facing roles require a higher level of literacy than those in non-technical positions [2]. Deployers also face new obligations, such as conducting fundamental rights impact assessments, underscoring the need for organizations to justify their compliance measures [1]. Regulatory guidance, such as that from the Dutch authority, indicates that AI literacy levels should match the context in which an AI system is used and the groups it affects [1] [2].

To comply with the AI Act, organizations should develop training modules and policies tailored to their specific AI systems and governance strategies [2]. Training should cover foundational knowledge, regulatory frameworks [3], risk management, and data protection, so that both employees and external collaborators acquire adequate skills. It should focus on key employees involved in AI use or development, remain dynamic, and foster a comprehensive understanding of AI risks and opportunities [1]. Organizations can choose between remote and in-person formats depending on their staff’s baseline knowledge and the need for interaction with instructors, and training must be ongoing to keep pace with technological advances.

Enhancing policies and running workshops can help employees understand the organization’s approach to AI and compliance [2]. The European AI Office has created a Living Repository of AI Literacy Practices, which collects examples of how companies are addressing AI literacy and meeting their obligations under the Act [1] [2]. Although the repository is not formal compliance guidance, it is a useful resource for organizations in similar sectors [1].

With guidance from national regulators and EU authorities still limited, organizations are encouraged to address AI literacy requirements proactively, ahead of formal enforcement expected in the coming months [2]. This proactive approach supports both AI literacy obligations and broader AI governance frameworks [2]. Businesses must navigate their compliance requirements, and consistent enforcement across EU member states is crucial [3]. Regulators and courts are encouraged to adopt interpretations of the law that are realistic and supportive of innovation; harmonized standards are particularly vital for small and medium-sized enterprises (SMEs), which may lack the resources to develop compliance programs on their own [3]. Early action helps ensure that employees and stakeholders understand and align with the organization’s AI governance framework, risk tolerance, and policies [1] [2].

Conclusion

The AI Act’s emphasis on AI literacy has significant implications for organizations and individuals involved in AI system development and deployment. By mandating AI literacy, the Act seeks to foster informed decision-making, strengthen compliance, and promote ethical AI use [1] [2] [3]. Organizations that address these requirements proactively will be well prepared for the coming enforcement period; doing so not only facilitates compliance but also supports innovation and responsible AI governance across the EU.

References

[1] https://www.jdsupra.com/legalnews/podcast-r-g-tech-studio-navigating-ai-3880307/
[2] https://www.lexology.com/library/detail.aspx?g=7fb345b9-cf98-4124-94bc-2bbb723cd4c2
[3] https://www.hi-paris.fr/2025/04/17/the-eu-ai-act-where-it-landed-and-where-it-might-go/