Introduction

The AI Act grants enhanced powers to fundamental rights protection authorities to ensure that high-risk AI systems respect fundamental rights. It establishes safeguards for AI technologies used in critical areas such as facial recognition, hiring [1], and policing, and requires EU Member States to designate national competent authorities to oversee compliance, with a particular focus on protecting human rights.

Description

Under the AI Act, fundamental rights protection authorities are granted enhanced powers to address violations caused by AI systems [2], ensuring that high-risk technologies respect fundamental rights [1]. The Act establishes safeguards for high-risk AI systems [1], including those used in facial recognition, hiring [1], and policing. By November 2, 2024 [4], EU Member States were required to identify and publicly list these authorities [2] [4] and to notify the European Commission and other Member States of their designations [4]. Each Member State must also designate national competent authorities [5]: at least one notifying authority responsible for managing conformity assessment procedures and at least one market surveillance authority (MSA) empowered to investigate and enforce compliance with the AI Act.
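As a rough illustration of the designation requirement just described, the following Python sketch models a Member State's checklist: at least one notifying authority, at least one MSA, a published list, and notification of the Commission. The class and field names, and the authority names in the example, are assumptions made for this sketch rather than terms defined in the Act.

```python
from dataclasses import dataclass, field
from datetime import date

# Deadline for Member States to list authorities and notify the
# Commission, as described above [2] [4].
DESIGNATION_DEADLINE = date(2024, 11, 2)

@dataclass
class MemberState:
    name: str
    notifying_authorities: list[str] = field(default_factory=list)
    market_surveillance_authorities: list[str] = field(default_factory=list)
    list_published: bool = False
    commission_notified: bool = False

def designation_complete(state: MemberState) -> bool:
    """Check the minimum conditions sketched in the text: at least one
    notifying authority, at least one MSA, a published list, and
    notification of the Commission."""
    return (
        bool(state.notifying_authorities)
        and bool(state.market_surveillance_authorities)
        and state.list_published
        and state.commission_notified
    )

# Hypothetical example; the authority names are placeholders, not
# actual designations.
example = MemberState(
    name="Ireland",
    notifying_authorities=["Notifying Authority A"],
    market_surveillance_authorities=["Market Surveillance Authority B"],
    list_published=True,
    commission_notified=True,
)
print(designation_complete(example))  # True
```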

In Ireland, preparations are underway for nine authorities to assume new responsibilities under the AI Act by August 2026. However, concerns have been raised about the lack of additional funding to support these duties, which may hinder the enforcement of fundamental rights protections against AI-related harms [3]. The government must appoint regulators by August 2025 to enforce bans on dangerous AI systems [3], but not all regulators have been identified [3], and many lack the necessary expertise in fundamental rights. Currently, only the Data Protection Commission has a mandate that encompasses fundamental rights [3], which places an additional burden on fundamental rights bodies that may need to assist and train the other regulators [3].

The European AI Office [2], along with national market surveillance authorities [2], is tasked with the implementation [2] [5], supervision [2] [3] [4] [5], and enforcement of the AI Act [2] [3] [5], particularly for high-risk AI systems that impact critical areas such as biometrics [4], education [4], employment [3] [4], access to essential public services [4], law enforcement [2] [3] [4] [5], immigration [4], and the administration of justice [4]. These bodies will also support Ireland’s AI regulatory sandbox [3], which allows companies to test AI products under regulatory supervision [3], further increasing their responsibilities [3]. Effective enforcement of the AI Act requires strong technical knowledge of AI and a clear understanding of its requirements [3], yet only the Data Protection Commission appears to possess this expertise [3]. Other authorities face challenges in upskilling staff while managing existing duties [3], risking inadequate enforcement [3].

The AI Act emphasizes the protection of fundamental human rights by mandating comprehensive assessments of AI systems’ impacts on individuals and communities [1], integrating Human Rights Impact Assessments (HRIAs) into its framework to hold developers accountable for respecting rights such as privacy, non-discrimination [1], and freedom of expression [1]. By August 2, 2025 [3] [5], all Member States must empower their MSAs [5], leading to a variety of national enforcement models [5], and from August 2, 2026 [4], the majority of the AI Act’s provisions will be fully enforceable [4]. The European Commission is keeping to the AI Act’s implementation timeline [4], as demonstrated by the publication of the GPAI Code of Practice and related FAQs on July 10, 2025 [4]. The Code of Practice is currently under review [4]; once endorsed, organizations will face compliance obligations beginning August 2, 2025 [4]. The Commission has also issued Guidelines pertaining to GPAI models [4], and the AI Office will provide a final template for GPAI providers [4].
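To make the overlapping dates in this timeline easier to follow, the sketch below encodes the milestones cited above and reports which obligations apply on a given date. The data layout and function are illustrative assumptions; only the dates and descriptions come from the cited sources.

```python
from datetime import date

# Key AI Act milestones mentioned above; dates are taken from the
# cited sources, while this dictionary layout is purely illustrative.
MILESTONES = {
    date(2024, 11, 2): "Fundamental rights authorities publicly listed [2] [4]",
    date(2025, 8, 2): "MSAs empowered; GPAI Code of Practice obligations begin [3] [4] [5]",
    date(2026, 8, 2): "Majority of AI Act provisions fully enforceable [4]",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone descriptions whose dates have passed."""
    return [label for deadline, label in sorted(MILESTONES.items())
            if as_of >= deadline]

# Example: shortly after the August 2025 deadline, two milestones apply.
for item in obligations_in_force(date(2025, 9, 1)):
    print(item)
```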

The role of data protection authorities (DPAs) has been a significant topic of discussion [5], with the European Data Protection Board (EDPB) recommending their designation as MSAs for high-risk AI systems [5]. The EDPB has urged Member States to consider appointing DPAs for a broader range of high-risk systems [5], recognizing their expertise as beneficial for the effective enforcement of the AI Act [5]. In Poland [5], a draft law proposes the establishment of a new market surveillance authority [5], the Commission for the Development and Safety of Artificial Intelligence [5], which will incorporate representatives from four existing authorities to oversee AI systems [5].

The General Data Protection Regulation (GDPR) complements the AI Act by safeguarding personal data and privacy for individuals in the EU [1], establishing rights over personal data and imposing accountability on organizations that process such data [1]. In the context of high-risk technologies [1], the GDPR ensures fair and transparent handling of personal information [1], reinforcing individual rights [1]. HRIAs and the GDPR are interconnected [1], both aiming to protect individuals’ rights [1], particularly regarding privacy and data protection [1]. While the GDPR outlines legal obligations for data handling [1], HRIAs assess broader human rights impacts [1], identifying risks that may not be fully addressed by the GDPR alone [1]. Together [1] [4], they create a comprehensive framework for ensuring that new technologies uphold fundamental rights [1].
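The complementarity between GDPR obligations and the broader rights an HRIA examines can be pictured as two checklists applied to the same system. The following sketch is a conceptual illustration assuming a simple checklist structure; the obligation and right labels are drawn from those named above, not from any prescribed assessment methodology.

```python
from dataclasses import dataclass, field

# Labels drawn from the rights and obligations named in the text [1];
# the checklist structure itself is an illustrative assumption.
GDPR_OBLIGATIONS = ["lawful basis", "transparency", "accountability"]
HRIA_RIGHTS = ["privacy", "non-discrimination", "freedom of expression"]

@dataclass
class ImpactAssessment:
    system_name: str
    gdpr_findings: dict[str, bool] = field(default_factory=dict)
    hria_findings: dict[str, bool] = field(default_factory=dict)

    def open_risks(self) -> list[str]:
        """List obligations and rights not yet addressed, illustrating
        how an HRIA can surface risks beyond data handling alone."""
        risks = [o for o in GDPR_OBLIGATIONS if not self.gdpr_findings.get(o)]
        risks += [r for r in HRIA_RIGHTS if not self.hria_findings.get(r)]
        return risks

# Example: a system that satisfies GDPR checks can still leave
# broader human rights questions open.
frs = ImpactAssessment(
    system_name="facial recognition pilot",
    gdpr_findings={o: True for o in GDPR_OBLIGATIONS},
    hria_findings={"privacy": True},
)
print(frs.open_risks())  # ['non-discrimination', 'freedom of expression']
```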

The future of human rights protection in AI governance hinges on translating abstract principles into enforceable practices centered on human dignity [1]. As AI systems increasingly influence decision-making [1], establishing meaningful safeguards against harm is essential [1]. Mandatory [1], transparent [1], and participatory HRIAs are vital for early risk identification and accountability in technological development [1]. Effective implementation requires robust legal frameworks [1], continuous oversight [1], and genuine engagement with affected communities [1], ensuring that human rights are foundational to innovation that serves the common good [1]. The challenge lies in creating governance structures that are adaptable and principled [1], capable of keeping pace with technological advancements while upholding justice [1], equality [1], and freedom [1]. Immediate government action is necessary to provide resources and training to ensure that regulatory bodies can effectively protect fundamental rights as AI deployment increases [3].

Conclusion

The AI Act represents a significant step forward in ensuring that AI systems respect fundamental human rights. By empowering national authorities and integrating comprehensive assessments, the Act aims to safeguard individuals and communities from potential AI-related harms. However, effective enforcement requires adequate resources, expertise [3] [5], and collaboration among Member States [5]. As AI technologies continue to evolve, the challenge will be to maintain robust governance structures that uphold justice, equality [1], and freedom [1], ensuring that innovation serves the common good [1].

References

[1] https://www.humanrightsresearch.org/post/risky-algorithms-real-rights-unpacking-human-rights-impact-assessments-for-facial-recognition-and
[2] https://digital-strategy.ec.europa.eu/en/policies/fundamental-rights-protection-authorities-ai-act
[3] https://completeaitraining.com/news/irelands-fundamental-rights-bodies-left-unprepared-for-eu/
[4] https://natlawreview.com/article/eu-ai-act-compliance-deadline-august-2-2025-looming-general-purpose-ai-models
[5] https://www.linkedin.com/pulse/national-competent-authorities-under-eu-ai-act-krzysztof-wyderka-f1tuf