Introduction

The legal landscape surrounding artificial intelligence (AI) is undergoing significant transformation as major legal battles unfold in the UK, US, and EU [9]. These cases primarily concern the unauthorized use of copyrighted works in AI development, with over 40 lawsuits filed against AI companies since 2022 [7]. Their outcomes are poised to reshape the AI industry, affecting developers, deployers, investors, and data owners [5].

Description

Major legal battles in the UK, US, and EU are poised to reshape the AI landscape [5] [9], with over 40 lawsuits filed against AI companies since 2022, primarily over the unauthorized use of copyrighted works in AI development [7]. Significant copyright lawsuits such as Getty Images v. Stability AI [1] [2] [3] [4] [5] [6] [9], Bartz v. Anthropic [7] [8] [9], and Disney and Universal v. Midjourney raise critical issues for developers, deployers, investors, and data owners [5]. These cases address whether the use of copyrighted works in AI training constitutes copyright infringement [9], underscoring the risks associated with training data, model outputs, and intellectual property [1] [2] [3] [5] [8] [9]. Stakeholders are prompted to reevaluate their legal strategies and governance practices in light of these developments.

The outcomes of these lawsuits could significantly affect foundation model developers, potentially shaping the viability of both foundational and application-level AI development [5]. The Bartz v. Anthropic case [7] [8] [9], for instance, established that using legally purchased books for AI training can qualify as fair use; however, the retention of pirated works in a “central library” was deemed not to be fair use, exposing Anthropic to potential liability for damages [8]. This highlights the nuanced, case-specific nature of fair use determinations, which do not yet provide a definitive answer on AI training [7]. Authors have raised concerns that AI-generated content competes with their works, creating market confusion and potential revenue loss [7]. Judge William Alsup ruled that copyright law should not shield authors from competition [7], while Judge Vince Chhabria suggested that fair use should be assessed on the specific circumstances of each case, indicating that established authors might have weaker claims than emerging writers adversely affected by AI-generated content [7].

In the Getty Images v. Stability AI case [1] [2] [3] [5] [9], a significant legal claim concerns secondary infringement: Getty alleges that Stability imported an “article” constituting an infringing copy of Getty’s copyrighted works, in violation of section 22 of the Copyright, Designs and Patents Act (CDPA) [6]. Getty’s primary claims include the unauthorized use of its photographs to train Stability AI’s Stable Diffusion system, which generates images from text or image prompts [3]. Stability relies on the “E-Commerce Safe Harbours,” particularly the “hosting defense,” which protects intermediaries from liability for user-generated content even post-Brexit [4]. Stability claims it functions much like Google, merely processing data uploaded by users, and therefore bears no responsibility for any infringing content they create [4]. Getty disputes this characterization, arguing that Stability’s bi-partite relationship with its users disqualifies it from the hosting defense, and that Stability’s operations in any event exceed the passive role required to benefit from this safe harbor [4].

This ongoing trial in London raises critical legal questions: can an article be deemed an infringing copy if it no longer contains the copyrighted material, and can intangible items, such as model weights, qualify as “articles” under the law [6]? Getty argues that the definition of an article encompasses both tangible and intangible items, while Stability contends that the term implies a tangible object, challenging the applicability of sections 22, 23(a), and 23(b) of the CDPA to intangible copies produced by its AI system [3] [6]. If Getty overcomes these legal hurdles, it must still demonstrate that Stability’s model is indeed an infringing copy and that the model was imported with the requisite knowledge, or reason to believe, that it was [6]. Stability warns that a ruling against it could restrict UK users’ access to AI models like Stable Diffusion, prompting political calls for legal reform to foster economic growth [4]. However, reform that weakens copyright protection could undermine the creative industries, which contribute significantly to the UK economy, generating £126 billion in gross value added in 2022 [4].

In response to the evolving legal landscape, the UK Data (Use and Access) Bill, which received royal assent on 19 June 2025, aims to balance AI systems’ need for data access to drive innovation with the rights of creators whose works may be used to train those systems [1]. The legislative process involved extensive debate, with high-profile artists advocating stricter protections and the tech industry pushing for more flexible rules to foster economic growth [1]. A progress statement is required by December 2025, detailing the economic impact and use of copyrighted works in AI development, followed by a comprehensive impact assessment report expected by March 2026, which will address the economic implications of using copyrighted materials in AI training, including practices such as data scraping and metadata usage [1].

For those deploying AI models, the primary concern is copyright infringement arising from model outputs, a risk that remains largely untested [5]. Deployers should carefully review IP infringement indemnities in their agreements, as broad indemnities may not provide the anticipated protection [5]. Effective governance can mitigate infringement risks, but the prevailing legal uncertainty demands ongoing vigilance [5]. The Getty Images v. Stability AI case also raises significant trademark issues, particularly regarding the use of copyrighted works to train AI models, further complicating the landscape [1] [9]. A ruling in favor of Getty could limit UK users’ access to AI models, although Stability might still be able to offer web-based versions without liability [6].

Investors in AI must adapt their valuation and diligence approaches to the evolving legal landscape, focusing on technology- and use-case-specific analyses of AI risks [5]. Legal scholar Mark Lemley has noted that while AI training may be viewed as fair use, AI companies must clarify how their models avoid copyright infringement [7]. Data owners are closely monitoring these cases, as the outcomes may accelerate the development of a data licensing market, which has already begun to take shape with various licensing structures emerging [5]. The Like Company v. Google Ireland case is significant for determining whether AI-generated outputs infringe reproduction rights and how the text and data mining exceptions apply, influencing how AI technologies are governed within the EU’s legal framework [9].

The implications of copyright litigation extend beyond copyright law itself, influencing other legal areas such as privacy, particularly where data is repurposed for AI model development [5]. As understanding of how large language models retain training data evolves, further lawsuits from authors whose works can be reproduced by these models are anticipated [7]. The ongoing dialogue around fair use emphasizes the need to balance innovation with the protection of creators’ rights, ensuring that legal systems adapt to the challenges posed by new technologies while safeguarding intellectual property [9]. As the legal landscape continues to evolve, all participants in the AI ecosystem must remain alert to shifting risks and adapt their strategies accordingly [5].

Conclusion

The ongoing legal battles in the AI sector are set to have profound implications for the industry. The outcomes of these cases will influence not only copyright law but also adjacent legal areas such as privacy and data protection. As the legal landscape evolves [5], stakeholders in the AI ecosystem must remain vigilant and adapt their strategies to navigate a complex and changing environment. The balance between fostering innovation and protecting intellectual property rights will be crucial in shaping the future of AI development and deployment.

References

[1] https://www.stephens-scown.co.uk/intellectual-property-2/data-protection/uk-data-use-and-access-bill-what-is-next-for-ai-and-copyright-law/
[2] https://legalnewsfeed.com/2025/07/11/generative-ai-faces-legal-challenges-as-copyright-lawsuits-mount-against-tech-firms/
[3] https://www.pinsentmasons.com/out-law/analysis/getty-images-v-stability-ai-copyright-claims-significance
[4] https://www.shma.co.uk/our-thoughts/getty-images-vs-stability-ai/
[5] https://www.jdsupra.com/legalnews/copyright-in-ai-key-implications-from-7539853/
[6] https://www.taylorwessing.com/en/insights-and-events/insights/2025/07/getty-v-stability
[7] https://www.theatlantic.com/technology/archive/2025/07/anthropic-meta-ai-rulings/683526/
[8] https://theoutpost.ai/news-story/landmark-rulings-on-ai-copyright-implications-for-tech-giants-and-content-creators-17709/
[9] https://opentools.ai/news/generative-ai-faces-copyright-scrutiny-in-landmark-global-legal-battles