Introduction

The regulation of facial recognition technology (FRT) and remote biometric identification (RBI) within the European Union is the subject of ongoing debate, centered on the protection of individual freedoms, privacy [2] [3] [6] [7] [8] [9], data protection [2] [5] [6] [7] [8] [9], and human rights [6]. This discourse is particularly relevant given the rapid advancement of AI technologies and their expanding application across sectors. The challenge lies in balancing technological capabilities against the risks associated with personal data protection [6], mass surveillance [6] [9], and national security [6].

Description

In the European Union [3] [4] [6] [8], there is an ongoing debate regarding the regulation of facial recognition technology (FRT) and remote biometric identification (RBI) to protect individual freedoms, with a focus on privacy [6], data protection [2] [5] [6] [7] [8] [9], and human rights [6]. FRT [4] [5] [6] [7] [9], which relies on advanced AI systems to process biometric data, carries significant legal implications due to its biometric nature [7], particularly concerning fundamental rights and democratic values [7]. While substantial advances have been made in its development [4], legal scrutiny of its implications for individual rights has only recently gained traction [4]. Law enforcement agencies and private companies are increasingly testing these AI technologies for public safety purposes, using FRT to identify individuals from unique facial characteristics (captured as a digital representation of those traits) and RBI to match biometric features at a distance against stored data. The use of these technologies spans various sectors, including banking [6], transport [6], health [6], and elections [6], necessitating a careful balance between technological capabilities and the risks associated with personal data protection [6], mass surveillance [6] [9], and national security [6].

Concerns have been raised about the potential for discrimination and the infringement of data privacy rights, particularly as some Member States have previously disregarded EU rules on user data privacy in law enforcement [9]. This includes retaining personal data from telecommunications providers [9], which has enabled access to user location data. Although the EU permits limited data retention for security reasons [9], instances of mass data retention have occurred, infringing on privacy rights [9]. The analysis suggests a dual approach to securing biometric facial image data [6], integrating both privacy and cybersecurity legal frameworks [6]. Security breaches during data storage and transmission could lead to identity theft and other forms of harassment [6], threatening individual security and raising the spectre of mass surveillance and state overreach [6], which can in turn affect political participation and social equality [6].

Despite the EU’s more stringent regulations [5], including the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act), criticism of the use of FRT continues [5]. The AI Act aims to balance security needs with the protection of individual rights and is positioned as a potential model for governing biometric data usage beyond law enforcement [8]. However, challenges remain in its effective enforcement, particularly concerning the authorization process and the tension between security and individual rights [8]. Issues include vague definitions within the AI Act [5], a policy shift from an outright ban on real-time FRT to a more flexible approach with exceptions [5], and uncertainty surrounding the deployment of these technologies by private entities outside of law enforcement [5]. Notably, AI systems involving real-time remote biometric identification in public spaces are prohibited unless specific law enforcement conditions are met and the deployment is approved by a court [1], a prohibition that extends to applications that may affect security or fundamental rights [1].

In the UK [2] [3] [5], the deployment of biometric technologies, including facial recognition and emotion recognition [3], has raised legal compliance concerns, necessitating a new governance approach [3]. The Ada Lovelace Institute advocates for a risk-based regulatory framework similar to the EU’s AI Act [3], particularly addressing live facial recognition (LFR) used by law enforcement [3]. A 2020 UK Court of Appeal ruling deemed the South Wales Police’s use of real-time facial recognition unlawful under the European Convention on Human Rights [3], particularly regarding privacy and freedom of assembly [3]. This ruling established mandatory standards for police use of LFR [3], yet current police guidelines do not fully align with the ruling [3], creating uncertainty about the legality of newer LFR deployments [3].

AI experts in the UK are advocating for stricter regulations on facial recognition technology due to concerns over privacy [2], misuse [2], and inherent biases in existing systems [2]. The current regulatory framework is viewed as inadequate [2], creating a legal grey area that undermines public trust [2]. Reports highlight the fragmented nature of the UK’s approach to regulating facial recognition and biometric technologies [2], emphasizing the need for a comprehensive legal framework that establishes clear guidelines and tiered obligations based on the risk associated with different uses of the technology [2]. Such a framework would enhance privacy protections and accountability, addressing the significant challenges posed by the absence of robust regulations for both government and industry.

The growing reliance on AI in decision-making by public authorities and large corporations poses significant implications [9], particularly for marginalized groups [9]. A notable case in the Netherlands involved the tax authorities using an algorithm to create “risk” profiles for identifying childcare benefits fraud [9], disproportionately affecting families from ethnic minorities and lower-income backgrounds [9]. This led to many families being unjustly denied assistance [9], with severe consequences for children [9]. In response, civil society advocates have called for amendments to the AI Act to grant individuals the right to seek redress when adversely affected by AI systems [9], a provision not included in the original proposal by the Commission [9].

The existing governance model for biometrics in the UK is fragmented and inadequate [3], highlighting the need for a comprehensive legal framework that covers both police and private sector use, as well as inferential biometric systems [3]. The proposed biometric rulebook should categorize systems based on their risk to fundamental rights and include safeguards such as transparency [3], notification requirements [3], and technical standards for efficacy and discrimination [3]. Recent updates to the EU’s cybersecurity legal framework aim to enhance the security of digital infrastructure [6], particularly in the context of large-scale biometric data processing in law enforcement [6]. The potential for abuse and widespread surveillance poses risks to democratic values [6], as national security and law enforcement remain under the jurisdiction of individual Member States [6]. The introduction of centralized biometric databases [6], such as those proposed under Prüm II [6], raises concerns about their vulnerability to cyberattacks [6], emphasizing the need for robust technical security standards and privacy protections [6]. Procedural fairness and judicial independence are underscored as essential to protecting fundamental rights [8], while fragmentation in cybersecurity approaches among Member States could further complicate the effective regulation of biometric data [6]. A balanced approach that prioritizes both security and individual rights is essential [8], along with ongoing efforts to refine regulatory frameworks to meet emerging challenges posed by technological advancement [8].

Conclusion

The ongoing debate over the regulation of facial recognition and biometric technologies in the EU and the UK underscores the complex interplay between technological advancement and the protection of individual rights. While these technologies offer significant benefits, they also pose substantial risks to privacy, data protection [2] [5] [6] [7] [8] [9], and democratic values [4] [6] [7]. The need for comprehensive [2], coherent, and enforceable regulatory frameworks is critical to ensuring that these technologies are used responsibly and ethically, safeguarding individual freedoms while addressing security concerns. The evolving legal landscape must continue to adapt to the challenges posed by rapid technological advancements, ensuring that the balance between innovation and rights protection is maintained.

References

[1] https://www.novagraaf.com/en/insights/eu-ai-act-new-legal-framework-ethical-safe-and-innovative-use-artificial-intelligence
[2] https://opentools.ai/news/ai-experts-call-for-stricter-regulations-on-facial-recognition-in-the-uk
[3] https://www.biometricupdate.com/202505/ada-lovelace-institute-questions-legality-of-facial-recognition-in-uk
[4] https://digitalsociety.eui.eu/publication/next-democratic-frontiers-for-facial-recognition-technology-frt/
[5] https://techgdpr.com/blog/comparing-the-uk-and-eu-framework-on-facial-recognition-technology/
[6] https://ai-regulation.com/biometric-data-and-facial-recognition-technology-in-the-eu/
[7] https://link.springer.com/chapter/10.1007/978-3-031-89794-8_1
[8] https://link.springer.com/chapter/10.1007/978-3-031-89794-8_5
[9] https://www.brusselstimes.com/292835/eu-politicians-split-between-innovation-and-human-rights-in-ai-regulation-bill