Introduction
Emotion recognition artificial intelligence (Emotion AI) is a rapidly evolving field that leverages biometric data to analyze human emotions. This technology [3], rooted in affective computing [1] [3], intersects with various disciplines and has seen significant growth due to advancements in computing and sensor technologies. However, its deployment [2] [3], particularly in the European Union, is subject to stringent regulations under the EU AI Act [1] [3], which aims to mitigate risks associated with AI systems in sensitive areas such as employment and education.
Description
Emotion AI uses biometric and other data [1] [3], such as facial expressions and tone of voice [1] [3], to identify and analyze emotions [1] [3]. The field [1], rooted in affective computing [1] [3], intersects with natural language processing [1], psychology [1], and sociology [1]. Emotion AI has gained traction due to advances in computing power and sensor technology [3], with the market projected to grow from USD 3 billion in 2024 to USD 7 billion by 2029 [3]. Its applications are increasingly seen in public safety, customer insights [3], and therapeutic contexts [3].
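To make the kind of inference these systems perform concrete, the sketch below is a purely illustrative, rule-based mapping from simple vocal and facial features to a coarse emotion label. It is not any vendor's implementation; the feature names, thresholds, and labels are hypothetical, and real systems rely on trained models rather than hand-written rules.

```python
from dataclasses import dataclass


@dataclass
class BiometricFeatures:
    """Hypothetical per-utterance features an Emotion AI pipeline might extract."""
    pitch_variance: float   # variability of vocal pitch (arbitrary units)
    speech_rate: float      # words per second
    smile_intensity: float  # 0.0 (none) to 1.0 (broad smile), from facial landmarks


def infer_emotion(f: BiometricFeatures) -> str:
    """Toy rule-based mapping from biometric features to a coarse emotion label.

    The thresholds are invented purely to illustrate that the output is an
    *inference* about a person's internal state, not a measurement of it.
    """
    if f.smile_intensity > 0.6 and f.pitch_variance > 0.5:
        return "joy"
    if f.speech_rate > 3.0 and f.pitch_variance > 0.7:
        return "stress"
    if f.smile_intensity < 0.2 and f.speech_rate < 1.5:
        return "sadness"
    return "neutral"


if __name__ == "__main__":
    sample = BiometricFeatures(pitch_variance=0.8, speech_rate=3.4, smile_intensity=0.1)
    print(infer_emotion(sample))  # -> "stress"
```

Output of this kind, an inference about a natural person's emotional state drawn from biometric signals, is what the Act's broad definition of an emotion recognition system is aimed at.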
However, the deployment of Emotion AI is subject to stringent regulation under the EU AI Act [3], which categorizes most AI systems used in employment and human resources as “High Risk” or as a “Prohibited Use.” The Act, which entered into force on 1 August 2024, prohibits the use of AI systems to infer emotions in workplace and educational settings [1] [3], with exceptions only for medical or safety purposes [1] [3]. The Act [1] [2] [3] [4] [5], which is being implemented in stages until 2 August 2026 [4], applies uniformly across all EU member states and prohibits AI applications that present an unacceptable risk [4], such as social scoring [4], manipulation [5], and biometric categorization that infers protected personal attributes [4], including sexual orientation [2], race [2], and religion [2]. The European Commission has published guidelines to clarify these prohibitions [1] [3], emphasizing the lack of scientific consensus on the reliability of emotion recognition systems and acknowledging cultural variations in emotional expression [1]. Additional requirements for high-risk systems are expected by February 2026 [4].
Two case studies illustrate the implications of these regulations [3]. The first involves a tech company using sentiment analysis software for sales calls [3], which could inadvertently capture the emotions of sales representatives [3]. The second case study focuses on a consultancy firm employing Emotion AI in remote recruitment processes [3], raising concerns about bias and the potential impact on candidates’ employability [3]. The guidelines explicitly state that using emotion recognition systems during recruitment or probationary periods is not allowed [1], and any AI system assessing the emotions of job candidates falls under this prohibition [1].
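One way the risk in the first case study might be illustrated is the sketch below: a hypothetical post-processing step that discards sentiment scores attributed to the employee (sales-representative) channel of a call before results are reported, keeping only customer-side scores. The channel labels, data shape, and function names are assumptions made for illustration; nothing here reflects a specific product or a legal safe harbour.

```python
from collections.abc import Iterable

# Each record is a hypothetical per-utterance result from a sentiment engine:
# (speaker_channel, sentiment_label), e.g. ("customer", "frustrated").
Utterance = tuple[str, str]


def customer_only_sentiment(results: Iterable[Utterance]) -> list[str]:
    """Drop any sentiment inferred for the employee side of a sales call.

    Illustrative only: the EU AI Act prohibition targets inferring the
    emotions of people in the workplace (here, the sales representative),
    so an engine scoring both channels could capture exactly that.
    """
    return [label for channel, label in results if channel == "customer"]


if __name__ == "__main__":
    call = [
        ("customer", "frustrated"),
        ("sales_rep", "anxious"),   # inferring this is the problematic part
        ("customer", "satisfied"),
    ]
    print(customer_only_sentiment(call))  # ['frustrated', 'satisfied']
```

Whether such filtering would be sufficient is a legal question rather than a technical one; the sketch only shows where employee data can enter the pipeline.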
The EU AI Act broadly defines emotion recognition systems as those that infer emotions from biometric data [1] [3]. While the text of the prohibition does not explicitly mention these systems [3], the guidelines clarify that both the recognition and the inference of emotions are covered [3]. Some training-related uses of Emotion AI are permitted [3], provided the results do not influence HR decisions [3], although this exemption is not explicitly stated in the Act itself [3]. Employers may also be classified as “providers” of AI systems [2]: the Act defines a provider as any individual or entity that develops or markets an AI system or model [2], whether for profit or free of charge [2], and this classification significantly broadens an employer's responsibilities and obligations.
The practical application of these regulations poses challenges [3], particularly in performance-based environments where emotional analysis could affect employee morale and lead to legal disputes [3]. Employers must assess the risk classification of AI systems used in the workplace and ensure compliance with the Act [4], including proper communication with employees about their obligations and the assignment of human oversight [4]. Additionally, the guidelines indicate that the definition of ‘workplace’ encompasses both physical and virtual spaces [3], broadening the scope of the prohibition [3].
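A minimal way to operationalize this assessment is an internal register of workplace AI systems that records each system's purpose, its presumed risk tier under the Act, the person assigned human oversight, and whether employees have been informed. The field names, tiers, and escalation rule below are illustrative assumptions, not terms prescribed by the Act.

```python
from dataclasses import dataclass


@dataclass
class WorkplaceAISystem:
    """One entry in a hypothetical internal register of workplace AI systems."""
    name: str
    purpose: str
    risk_tier: str          # e.g. "prohibited", "high-risk", "limited", "minimal"
    human_overseer: str     # person accountable for human oversight
    employees_informed: bool


def needs_action(system: WorkplaceAISystem) -> bool:
    """Flag entries that require escalation before the system may be used."""
    return (
        system.risk_tier == "prohibited"
        or not system.employees_informed
        or not system.human_overseer
    )


if __name__ == "__main__":
    register = [
        WorkplaceAISystem("CV screener", "shortlist applicants", "high-risk", "HR lead", True),
        WorkplaceAISystem("Mood tracker", "infer employee emotions", "prohibited", "", False),
    ]
    for s in register:
        print(s.name, "-> escalate" if needs_action(s) else "-> ok")
```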
Organizations must strengthen their governance and compliance frameworks for AI practices [3], especially where employee interactions are concerned [3]. For customer-related applications [3], businesses must adhere to the forthcoming requirements for High-Risk AI Systems [3], which will be enforced from August 2026 [1] [3], pending further guidance from the European Commission [1] [3]. Non-compliance can result in substantial penalties [5]: fines of up to €35 million or 7% of global annual turnover, whichever is higher, for serious infringements [5], and up to €15 million or 3% of turnover for violations of high-risk AI obligations [5]. The Act has sparked controversy due to its focus on regulating both potential and actual capabilities of AI systems [4], which may hinder the development and use of new technologies [4].
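For a sense of scale, the sketch below computes the maximum applicable fine for a serious infringement as the greater of the fixed cap and the turnover-based cap, which is how the Act's penalty ceilings are generally understood to apply to undertakings; the turnover figure is invented for illustration.

```python
def max_fine_serious_infringement(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a serious (prohibited-practice) infringement.

    The EU AI Act sets the ceiling at EUR 35 million or 7% of worldwide annual
    turnover; for most undertakings the higher of the two applies.
    """
    return max(35_000_000, 0.07 * global_annual_turnover_eur)


if __name__ == "__main__":
    # Hypothetical company with EUR 2 billion in global annual turnover.
    turnover = 2_000_000_000
    print(f"Maximum fine: EUR {max_fine_serious_infringement(turnover):,.0f}")
    # 7% of EUR 2 billion is EUR 140,000,000, which exceeds the EUR 35 million floor.
```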
Conclusion
The EU AI Act’s stringent regulations on Emotion AI highlight the complexities and challenges of deploying such technologies in sensitive areas. While these regulations aim to protect individuals from potential risks, they also pose significant compliance challenges for organizations. The Act’s impact on innovation and the development of new technologies remains a contentious issue, as it seeks to balance technological advancement with ethical considerations and societal safety.
References
[1] https://www.lexology.com/library/detail.aspx?g=c64e59ff-de5d-42eb-b9d8-cd73c0118781
[2] https://leglobal.law/2025/03/26/ireland-a-look-at-the-eu-artificial-intelligence-ai-act-its-impact-on-employers-and-what-they-can-do-to-prepare/
[3] https://www.jdsupra.com/legalnews/eu-ai-act-spotlight-on-emotional-4488457/
[4] https://www.wtwco.com/en-gb/insights/2025/03/eu-comprehensive-ai-act-includes-obligations-for-employers
[5] https://aumans-avocats.com/en/ai-act-applications-and-consequences/