Introduction
The European Data Protection Board (EDPB) has released Opinion 28/2024 [3] [7], offering comprehensive guidance on the development and deployment of AI models. Requested by the Irish Data Protection Commission (DPC) [2] [7], the guidance [1] [2] [9] aims to harmonize regulatory practice across Europe and gives data protection authorities (DPAs) essential direction on interpreting the data protection provisions relevant to AI.
Description
The European Data Protection Board (EDPB) has issued comprehensive guidance [9] [10], encapsulated in Opinion 28/2024, on the complexities of developing and deploying AI models. Adopted on December 17, 2024 [7], the opinion was drafted at the request of the Irish Data Protection Commission (DPC) to promote regulatory harmonization across Europe. It provides essential guidance for data protection authorities (DPAs) on interpreting the relevant data protection provisions [7], although it is not exhaustive [7]. The guidance explains how AI companies may process personal data without individual consent under the General Data Protection Regulation (GDPR), provided the final application does not disclose any private information [10]. It aims to clarify key compliance questions and assist Supervisory Authorities throughout the EU/EEA in regulating AI products responsibly.
The guidance sets out a case-by-case approach for determining when AI models can be considered anonymous: anonymity must be assessed in light of the specific circumstances. DPAs must evaluate both the likelihood of extracting personal data from the model itself and the likelihood of obtaining such data through queries; both must be insignificant for the model to be deemed anonymous [7]. Personal data used to train AI models is not automatically anonymous and requires careful analysis [8]. For data to be considered truly anonymous [4], the risk of re-identifying individuals from the training data must be minimal [10], and strong technical measures must be in place to prevent re-identification and ensure sufficient anonymization [9]. The opinion offers a non-prescriptive, non-exhaustive list of methods to demonstrate this anonymity [3] [7], emphasizing measures that limit the collection of personal data during training [7].
Regarding legitimate interest [2] [3] [4] [5] [6] [7] [9] [11], the guidance details how DPAs should assess its appropriateness as a legal basis for processing personal data in AI contexts [3]. The assessment follows a three-step test: the interest pursued must be lawful and specific, the processing must be necessary for that purpose (with less intrusive alternatives considered), and a balancing test must show that the interest is not overridden by individuals’ rights [11]. Examples such as conversational agents and AI that enhances threat detection systems illustrate potential legitimate-interest applications. However, processing must be strictly necessary [3], and the balancing test must demonstrate that the legitimate interest in processing data outweighs the impact on individuals’ rights [9]. The EDPB emphasizes that businesses must substantiate their claims of legitimate interest [5], particularly in the context of AI development and deployment [5].
The balancing test requires careful consideration of individual circumstances [11], focusing on potential risks to fundamental rights during AI model development and deployment [11]. Regulators must evaluate the reasonable expectations of data subjects [10] [11], including the anticipated use of their data [11], its public availability [3] [10] [11], and the context of its collection [3] [11]. If the balancing test indicates that individual interests outweigh those of the processors [11], tailored mitigation measures should be implemented based on the specific characteristics of the AI model [11]. These measures may include technical solutions for anonymization [11], pseudonymization [7] [9] [11], data masking [11], enabling individual rights [4] [9] [11], and enhancing transparency [11].
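Of the mitigation measures just listed, pseudonymization is among the most directly implementable. As a loose illustration only (the opinion does not prescribe any particular technique, and the `pseudonymize` helper and key handling below are hypothetical), a direct identifier can be replaced with a keyed hash before a record enters a training pipeline:

```python
import hmac
import hashlib

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike plain hashing, the secret key means the mapping cannot be
    reproduced by anyone holding only the pseudonymized data; the key
    itself must be stored separately under strict access controls.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# In practice the key would come from a key-management service,
# never be hard-coded, and be rotated per retention policy.
KEY = b"example-key-kept-separately"

record = {"email": "jane.doe@example.com", "query_text": "weather tomorrow"}
record["email"] = pseudonymize(record["email"], KEY)  # 64-char hex digest
```

Note that under the GDPR, pseudonymized data generally remains personal data, since the key can re-link it to an individual; it is a risk-reduction measure, not anonymization.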
Companies must also establish mechanisms for data subjects to exercise their rights [9], including opting out [9], requesting erasure [9], or correcting data [9]. Transparency in data processing practices is essential [9], and the lawfulness of data processing must be evaluated during both the development and deployment phases [9], particularly if another firm is involved in processing personal data [9]. The UK Information Commissioner’s Office (ICO) has indicated that businesses can only scrape personal data from the internet for training generative AI models if they can demonstrate a valid legitimate interest [5]. If the balancing test indicates potential negative impacts on individuals [3], mitigating measures may be necessary to alleviate these effects [3].
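As a sketch of what such a rights mechanism might look like in practice (the `RightsRegistry` class and its fields are illustrative assumptions, not drawn from the guidance), a company could record opt-out and erasure requests and exclude the affected subjects' records from training data:

```python
from dataclasses import dataclass, field

@dataclass
class RightsRegistry:
    """Records data-subject requests (opt-out, erasure) so that
    later training runs can honour them. Illustrative only."""
    opted_out: set = field(default_factory=set)
    erasure_requested: set = field(default_factory=set)

    def opt_out(self, subject_id: str) -> None:
        self.opted_out.add(subject_id)

    def request_erasure(self, subject_id: str) -> None:
        self.erasure_requested.add(subject_id)

    def filter_training_records(self, records: list) -> list:
        # Drop every record belonging to a subject who opted out or
        # requested erasure, before the data reaches the pipeline.
        blocked = self.opted_out | self.erasure_requested
        return [r for r in records if r["subject_id"] not in blocked]

registry = RightsRegistry()
registry.opt_out("user-42")
records = [{"subject_id": "user-42"}, {"subject_id": "user-7"}]
kept = registry.filter_training_records(records)  # only user-7 remains
```

Erasure of already-trained model weights is a separate, harder problem; this sketch only covers keeping excluded subjects out of future training runs.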
The guidance also addresses the consequences of unlawfully training AI models on personal data, noting that EU regulatory authorities have discretion in enforcement actions [2], which may include fines [2], processing limitations [2] [7] [11], or data erasure [2]. While unlawful processing may have occurred [2], regulatory outcomes are not predetermined and require case-by-case analysis [2]. The EDPB emphasizes that the use of unlawfully processed personal data in developing an AI model could affect the legality of its deployment unless the data has been properly anonymized [3]. Non-compliance with the GDPR can result in significant fines [9], potentially up to €20 million or 4% of annual worldwide turnover, whichever is higher [9], and companies may be required to alter or delete their AI models [9].
The EDPB aims to provide guidance for case-by-case analyses [3], acknowledging the diversity and rapid evolution of AI models [3], and is also working on more specific guidelines [3], including on web scraping [3], which is critical for training AI models on large datasets [4]. The opinion has been welcomed by the Irish Data Protection Authority [8], which believes it will foster consistent regulation across the EU/EEA [8], providing clarity and guidance to industry while encouraging responsible innovation [8]. National DPAs retain discretion under the EDPB’s guidance [11]; individual cases [11], such as complaints against AI systems [11], will be evaluated against accountability and principles like Privacy by Design [11]. The Computer & Communications Industry Association (CCIA) has expressed support for the ruling [4], emphasizing the need for access to quality data to improve AI accuracy and mitigate bias [4], while also calling for clearer legal guidelines to prevent future uncertainty [4].
Conclusion
The EDPB’s Opinion 28/2024 serves as a pivotal document in guiding the responsible development and deployment of AI models across Europe. By addressing key issues such as data anonymization, legitimate interest [2] [3] [4] [5] [6] [7] [9] [11], and compliance with the GDPR [9], the guidance aims to harmonize regulatory practices and foster innovation while safeguarding individual rights. The document’s reception by various stakeholders underscores its importance in shaping the future of AI regulation, ensuring that AI technologies are developed and used in a manner that respects privacy and data protection principles.
References
[1] https://www.dataprotection.ie/en/irish-data-protection-commission-welcomes-edpb-opinion-use-personal-data-development-and-deployment
[2] https://www.mcgarrsolicitors.ie/2024/12/18/overview-on-edpb-opinion-28-2024-on-personal-data-in-ai-models/
[3] https://www.edpb.europa.eu/news/news/2024/edpb-opinion-ai-models-gdpr-principles-support-responsible-ai_en
[4] https://www.euronews.com/next/2024/12/19/eu-data-watchdog-sets-terms-for-ai-models-legitimate-use-of-personal-data
[5] https://www.pinsentmasons.com/out-law/news/edpb-opinion-gdpr-ai-adaptability
[6] https://www.jdsupra.com/legalnews/gdpr-and-ai-models-key-insights-from-2117554/
[7] https://legacy.dataguidance.com/news/eu-edpb-releases-opinion-personal-data-processing
[8] https://www.infosecurity-magazine.com/news/edpb-ai-training-personal-data/
[9] https://www.techrepublic.com/article/eu-guidance-ai-privacy-laws/
[10] https://www.csoonline.com/article/3628060/in-potential-reversal-european-authorities-say-ai-can-indeed-use-personal-data-without-consent-for-training.html
[11] https://techcrunch.com/2024/12/18/eu-privacy-body-weighs-in-on-some-tricky-genai-lawfulness-questions/