Introduction

Getty Images is embroiled in a landmark legal case against Stability AI, a London-based artificial intelligence company [2]. This case [1] [3] [6] [8], which began on June 9th, 2025, in the British High Court [2], represents a significant moment in the generative AI industry. It is the first major copyright trial focused on generative AI [5], addressing allegations that Stability AI unlawfully used millions of copyrighted images, including those owned by Getty Images [2], to train its Stable Diffusion model without proper licensing [2].

Description

Getty Images is currently engaged in a significant legal battle against Stability AI, a London-based artificial intelligence company [2], marking a pivotal moment in the generative AI industry [2]. This trial [1] [2] [5] [7], which commenced on June 9th, 2025, in the British High Court [2], is the first major copyright trial focused on generative AI and centers on allegations that Stability AI unlawfully copied and processed millions of copyrighted images, including those owned by Getty Images [2], to train its Stable Diffusion model without obtaining the necessary licenses [2]. Getty Images recently dropped its primary copyright infringement claims [4], reportedly because of weak evidence and a lack of witnesses with direct knowledge of the training process; this strategic shift removes a central pillar of the lawsuit and leaves the company pursuing secondary infringement and trademark claims.

Getty Images asserts that Stability AI’s actions pose a serious threat to artists and the broader entertainment industry [2]. The company claims that the AI’s output sometimes includes distorted versions of Getty watermarks [8], indicating that its images were not only used for training but also partially reproduced [8], which Getty says reflects a disregard for copyright protections. This issue is compounded by the AI’s frequent exposure to Getty’s content, leading it to treat the watermark as a natural image feature [1], which could mislead users into believing Getty has partnered with Stability AI. Additionally, Getty alleges that Stability AI has violated its database rights by extracting content from its database without permission [6]. The remaining claims argue that the AI models themselves may infringe copyright law [4], and that using these models in the UK could be considered importing infringing articles [4], regardless of where the training occurred [4]. Getty’s legal team emphasizes that the lawsuit is not a conflict between creativity and technology but rather a defense against unauthorized use of copyrighted material [2].

In defense [1] [2] [5], Stability AI acknowledges the use of Getty Images’ content for training but contests the validity of Getty’s claims [2], arguing that its use falls under the doctrine of fair use [2], which is broader in the US than the UK’s “fair dealing” exceptions. Stability AI asserts that the training of its model occurred on servers in the US and argues that only a small fraction of the generated images resemble Getty’s content [5]. Its fair-use defense hinges on the four statutory factors: the purpose of the use, the nature of the copyrighted work, the amount used, and the effect on the market [2]. Stability AI’s lawyers argue that the content is part of “collective human knowledge” and that their use does not negatively impact Getty’s market [2]. They also contend that Getty’s trademark claims will not succeed [4], as consumers do not view the watermarks as commercial messages from the company [4]. Furthermore, because the UK is no longer bound by EU copyright law post-Brexit, Stability AI may be able to rely on exceptions under Sections 28A and 29A of the Copyright, Designs and Patents Act [3], which respectively permit temporary copies with no “independent economic significance” and text and data mining for non-commercial research.

The trial raises additional legal issues, including trademark infringement and database rights infringement [2] [7]. Getty Images has expressed concerns about the potential for Stability AI to generate harmful content [2], including child sexual abuse material (CSAM) [1] [2], suggesting that if the model can produce such content [1], it must have been trained on related data [1]. However, the judge excluded this claim to maintain the trial’s timeline [1], emphasizing the need for proper procedural safeguards [1]. A Getty employee testified about the iStock database and content upload system [1], revealing how a lack of transparency makes it difficult to prove that Stability accessed specific images, and illustrating the broader challenge claimants face in establishing exactly which data Stability used [1].

The case extends beyond intellectual property [1], addressing societal expectations for ethical AI development [1], the rights of individual creators [1], and the limits of technological freedom [1]. Stability AI argues that legal restrictions could stifle innovation [1], while Getty emphasizes the need to protect creators’ rights [1], especially when their work is used to train competing systems [1]. Experts suggest that while the judge’s ruling may not establish new copyright precedents [5], it could significantly impact future licensing agreements between AI developers and content creators [5]. The High Court is tasked with defining legal boundaries for machine learning [1], determining what counts as meaningful creative input [1], and assessing the adequacy of the current copyright framework in light of evolving technologies [1]. A key issue in the case is whether Getty’s photographs possess sufficient creative originality for copyright protection, with Stability contending that the individuality of the photographers’ work [1], particularly at sports events [1], is minimal because the setting constrains how scenes can be captured [1]. In contrast [1] [8], Getty presented testimony from experienced photographers who highlighted the skill involved in capturing unique moments [1].

The central question remains whether AI should be allowed to train on protected works without permission or compensation [1], or whether AI companies should adhere to the same standards of consent [1], licensing [1] [3] [5] [8], and transparency as other commercial entities [1]. A balanced framework recognizing both human creators’ contributions and generative AI’s opportunities is essential to serve the public interest [1]. The court’s ruling will significantly influence the future of AI law and copyright policy [1], providing guidance on regulating data-driven innovation in the coming years [1]. The trial is expected to conclude by June 30th, 2025 [2]. A ruling in favor of Getty could impose significant licensing costs on AI companies [8], potentially hindering smaller startups and consolidating AI development among larger firms [8]. Conversely [8], a ruling in favor of Stability AI could affirm the use of online images for AI training [8], which may accelerate AI development but raises concerns among creators about the devaluation of their work [8].

The rise of generative AI has prompted widespread concern among creatives [2], leading to advocacy for stronger protections for creators’ rights [2]. Notable figures in the UK music industry have publicly supported these efforts [2], while similar legal actions are occurring in the United States [2], where Getty’s US division has filed a separate lawsuit against Stability AI for trademark and copyright infringement [4], seeking substantial damages for the alleged unauthorized use of copyrighted images [4]. Stability AI is also facing a lawsuit from a group of visual artists for copyright infringement [4], alongside Midjourney and DeviantArt [4]. In response to the evolving landscape, Getty Images has developed its own generative AI tool trained on its photography and video libraries [4], aiming to foster a collaborative relationship between copyright owners and AI companies while emphasizing the importance of protecting intellectual property in the advancement of AI technologies [2]. Stability AI’s Stable Diffusion model is released under the MIT license [3], a permissive open-source license [3], but the company also monetizes tools such as DreamStudio [3]; this mix of open-source development and commercial offerings [3] further complicates the legal landscape surrounding its use of copyrighted content.

Conclusion

The outcome of this trial will have far-reaching implications for the generative AI industry and copyright law. A ruling in favor of Getty Images could lead to increased licensing costs for AI companies, potentially stifling innovation and consolidating power among larger firms. Conversely [8], a ruling in favor of Stability AI could accelerate AI development but raise concerns about the devaluation of creative works. The case underscores the need for a balanced framework that respects both the rights of human creators and the opportunities presented by generative AI, ultimately shaping the future of AI law and copyright policy.

References

[1] https://lawdit.co.uk/readingroom/getty-images-v-stability-ai-legal-battle-overview
[2] https://www.jdsupra.com/legalnews/getty-images-vs-stability-ai-the-6279559/
[3] https://www.medianama.com/2025/06/223-getty-images-vs-stability-ai-lawsuit/
[4] https://techcrunch.com/2025/06/25/getty-drops-key-copyright-claims-against-stability-ai-but-uk-lawsuit-continues/
[5] https://www.brandsynario.com/getty-images-vs-ai-a-landmark-copyright-lawsuit/
[6] https://www.visive.ai/news/getty-images-vs-stability-ai-key-legal-clash-on-ai-and-copyright
[7] https://www.newsday.com/business/getty-images-stability-ai-copyright-trial-stable-diffusion-x70154
[8] https://natlawreview.com/article/two-major-lawsuits-aim-answer-multi-billion-dollar-question-can-ai-train-your