Introduction

The European Union is advancing its regulatory framework for liability arising from AI technologies through two key legislative initiatives: the Revised EU Product Liability Directive (PLD) and the AI Liability Directive (AILD) [4]. These directives are intended to complement the EU AI Act and establish a comprehensive regulatory framework for AI, reflecting the impact of digitalization on the international economic landscape [2] [4].

Description

The PLD, adopted by the European Council on 11 October 2024 [3], significantly broadens the definition of “product” to encompass not only physical goods and electricity but also intangible goods such as software and AI systems [1] [4] [5]. This expansion extends liability considerations to digital goods, ensuring that consumers and businesses alike can seek compensation for personal injury, property damage, or data loss resulting from defective AI products [4] [6]. The directive introduces non-contractual strict liability claims, allowing individuals to claim compensation without needing to prove fault or negligence [4]. It also strengthens consumer rights by requiring manufacturers to disclose relevant evidence during liability claims, thereby easing the burden of proof on claimants [2] [4]. If manufacturers fail to comply with these disclosure obligations, a presumption of defectiveness arises, increasing liability risks for companies involved in the trade of digital and smart products [2] [3].

A key innovation of the PLD is the introduction of a presumption of defect and a presumption of causal link, which simplifies the process for injured parties, particularly in cases involving complex products such as AI systems [3]. Manufacturers are required to maintain control over their products after release, ensuring that necessary updates and security measures remain in place [3]. The directive permits contractual limitations of liability among operators, but not against injured parties [3]. Injured parties may claim compensation for physical injury, material losses, and moral damages, while damage to the defective product itself is excluded [3]. The limitation period for legal action remains three years, with an extended 25-year period for cases involving slow-onset bodily injuries [3]. Substantially modified products are treated as new, which resets the limitation period [3].

The AILD focuses on adapting non-contractual civil liability rules specifically for AI-enabled products and services [4]. It aims to ensure that individuals harmed by AI systems receive protection comparable to that afforded to those harmed by other technologies in the EU [1] [5]. The directive seeks to facilitate claims for harm caused by AI, including malfunctions and unintended consequences, and establishes mechanisms such as rebuttable presumptions and evidence disclosure obligations for high-risk AI systems to ease the burden of proof for claimants [4]. The AILD is expected to introduce categories that trigger these obligations, particularly for general-purpose AI systems, autonomous vehicles, and insurance applications beyond health and life insurance [7]. However, there are concerns about potential overlaps or inconsistencies with the AI Act, particularly because the AILD does not address key elements of that Act, such as prohibited AI practices and the requirement for human oversight of high-risk AI systems [5].

The European Parliament has resumed discussions on the AILD following a complementary impact assessment, which evaluates the proposal’s relevance in light of the recently approved AI Act [1] [5]. This assessment highlights potential outdated elements and shortcomings, leading to uncertainty about whether the European legislator will withdraw the AILD proposal in favor of a new one that aligns better with the current legislative framework [5]. The impact assessment suggests expanding the directive’s scope to encompass general-purpose and “high-impact” AI systems [5], as well as software [1] [5], and proposes a mixed liability framework that combines fault-based and strict liability [1] [5]. It advocates for a shift from an AI-specific directive to a broader software liability regulation to enhance legal consistency across the EU [1] [5].

The European Commission will monitor AI incidents and may introduce further regulations [4], including potential strict liability for high-risk AI systems and mandatory insurance coverage [4]. The PLD has been formally adopted and will come into force shortly [4], while the AILD is still under legislative consideration [4].

These directives aim to simplify the process by which harmed individuals and businesses obtain compensation, and the PLD’s strict liability approach is particularly attractive to claimants [4]. The AILD provides an avenue for claims against users of AI systems and facilitates class actions and collective claims, enhancing consumer protection at the national level within EU member states [4].

The extraterritorial implications of the PLD and AILD may influence global standards for AI regulation, as EU-based consumers can seek compensation from EU entities in the supply chain even for products manufactured outside the EU [4]. Given the technical complexity of AI, the proposed rules aim to modernize product liability for the digital age, addressing the difficulties injured parties face in substantiating claims for damages caused by AI interactions [1]. Additionally, the causality presumption should be rebuttable where initial violations of the AI Act are later rectified, and non-compliance with the human oversight and monitoring obligations mandated by the AI Act should give rise to a direct presumption of causality between AI outputs and the resulting damages [7].

Tort law presents significant legal risks for developers who fail to implement adequate safety measures during the development and deployment of AI systems [7]. Considerable uncertainty surrounds the application of existing tort doctrine to AI, and jurisdictional variations may lead to liability risks and costly legal disputes [7]. Developers who do not adopt leading safety practices may face increased liability exposure, particularly for third-party misuse of their models [7]. To mitigate these risks, safety-focused policymakers and industry bodies should promote and formalize new safety standards and procedures [7].

Conclusion

The European Union’s initiatives through the PLD and AILD represent significant steps toward establishing a robust regulatory framework for AI liability. By broadening the scope of liability and introducing mechanisms to ease the burden of proof, these directives aim to protect consumers and businesses from the risks associated with AI technologies. The potential global influence of these regulations underscores the EU’s leadership in setting standards for AI governance. As the legislative process continues, the focus remains on ensuring that the regulatory framework is comprehensive, consistent, and adaptable to the evolving digital landscape [1] [2] [5].

References

[1] https://www.engage.hoganlovells.com/knowledgeservices/news/the-new-impact-assessment-on-the-eus-ai-liability-directive-proposal-an-uncertain-future-ahead/
[2] https://www.clydeco.com/en/insights/2024/10/reform-of-european-product-liability-directive-new
[3] https://www.lexology.com/library/detail.aspx?g=6bd756af-231d-4680-981e-1ce771605a4a
[4] https://www.michalsons.com/blog/eu-liability-directives-related-to-ai-revised-pld-and-aild/76166
[5] https://www.jdsupra.com/legalnews/the-new-complementary-impact-assessment-2465304/
[6] https://www.business-humanrights.org/en/latest-news/eu-proposed-ai-liability-directive-to-make-it-easier-to-sue-ais-give-business-legal-certainty/
[7] https://www.linkedin.com/pulse/ai-liability-crucial-next-step-regulation-akhlaq-ahmad-jofyf