Introduction

The rapid integration of generative AI technologies, such as large language models like ChatGPT, into organizational workflows presents both opportunities and challenges. While these tools enhance efficiency in various tasks, they also introduce significant legal and ethical risks that organizations must address.

Description

Kilpatrick’s Steve Borgman and Jordan Glassman presented on generative AI technologies, particularly large language models like ChatGPT, highlighting the risks and liabilities organizations face as employees increasingly adopt these tools and the guardrails available to manage those risks [2]. They emphasized the need for organizations to evaluate whether their existing insurance policies adequately cover the legal challenges posed by generative AI [2]. While AI tools enhance efficiency in tasks such as research, document review, and drafting, they also introduce unique risks, including the potential to generate incorrect information with unwarranted confidence [1].

The rapid commercialization of generative AI services has introduced significant legal complexities, particularly concerning liability under tort law and product liability frameworks. As employees incorporate tools like ChatGPT into their work, it is crucial to consider how AI-generated outputs will be used and the specific risks each use entails [2]. For instance, using AI-generated artwork could pose copyright infringement risks, while relying on AI-generated software for medical diagnoses may raise different liability and insurance coverage concerns [2]. Legal professionals have already cited non-existent cases because they did not understand the technology, underscoring the importance of comprehending AI tools and their limitations [1].

Within tort law, the tort of negligence is particularly relevant for addressing harms caused by generative AI: its flexibility allows it to adapt to new technological harms, suggesting that companies developing generative AI could be held liable for damages resulting from malicious use of their systems [3]. Key considerations include duty of care, legal causation, and the potential for recognizing pure economic loss as actionable harm [3]. Understanding the mechanics of generative AI is also crucial for safe usage: these models generate text from statistical patterns learned from extensive datasets, which can lead to “hallucinations”, fabricated information presented as fact [1]. Such inaccuracies can emerge from gaps in training data or ambiguous prompts, making it essential for legal practitioners to verify AI-generated content before relying on it, to avoid significant liabilities [1].
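
To make that verification step concrete, the minimal Python sketch below flags citation-like strings in a model’s draft so a human can check them against a vetted source list. The regex, the sample draft, and the verified_sources set are illustrative assumptions rather than part of any cited guidance; a real workflow would rely on a proper citator or case-law database.

    import re

    # Hypothetical pattern for "X v. Y"-style case citations; a real citator
    # would be far more robust.
    CITATION_PATTERN = re.compile(r"(?:[A-Z][a-z]+ )+v\.? [A-Z][a-z]+")

    def flag_unverified_citations(model_output, verified_sources):
        """Return citation-like strings that are not in the vetted source list."""
        candidates = CITATION_PATTERN.findall(model_output)
        return [c for c in candidates if c not in verified_sources]

    draft = "As held in Smith v. Jones, the duty of care extends to foreseeable users."
    print(flag_unverified_citations(draft, verified_sources=set()))
    # ['Smith v. Jones'] -- each flagged citation still needs human verification.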

Additionally, product liability under the Consumer Protection Act 1987 warrants examination, particularly regarding whether generative AI qualifies as a ‘product’ and the implications of strict liability for AI companies [3]. Recent developments in the EU’s Product Liability Directive support this classification, indicating a possible direction for UK law [3]. To mitigate risks, law firms should proactively select appropriate AI tools and establish open channels for employees to disclose the technology they use [1]. A clear AI Usage Policy is vital, outlining acceptable and unacceptable uses of AI, permissible applications, and data security protocols to protect client information; it should also set out vetting procedures for third-party vendors and emphasize transparency and explainability in AI decision-making processes [1].
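
As an illustration of how parts of such a policy might be made machine-checkable, the sketch below encodes an allow-list of tools and restricted data classifications in Python. The tool names, data classes, and rules are hypothetical assumptions; an actual policy would be defined by the firm’s risk and compliance teams.

    # Hypothetical allow-list and data-handling rules; real values would come
    # from the firm's risk and compliance teams.
    APPROVED_TOOLS = {"chatgpt-enterprise", "internal-summarizer"}
    BLOCKED_DATA_CLASSES = {"client-confidential", "privileged"}

    def is_use_permitted(tool, data_class):
        """Check a proposed AI use against the allow-list and data rules."""
        if tool not in APPROVED_TOOLS:
            return False, f"'{tool}' is not an approved vendor"
        if data_class in BLOCKED_DATA_CLASSES:
            return False, f"data classified as '{data_class}' may not be submitted"
        return True, "permitted, subject to human review of all outputs"

    print(is_use_permitted("chatgpt-enterprise", "privileged"))
    # (False, "data classified as 'privileged' may not be submitted")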

While existing frameworks of negligence and product liability are generally robust enough to address the challenges posed by generative AI, there is a pressing need to redefine regulatory boundaries, including reconsidering established principles in negligence law and potentially adopting new liability mechanisms [3]. Continuous training on AI risks and responsible usage is necessary to foster a culture of ethical AI application [1]. Even with risk management controls in place, lawyers must exercise sound judgment, treating AI outputs as one part of the workflow while maintaining oversight and their own reasoning [1]. Regular audits and compliance checks are essential to ensure adherence to ethical standards and to identify unauthorized AI tools, thereby reducing the risks associated with shadow IT [1]. The ongoing tension between fostering innovation and managing the associated risks remains a critical issue for organizations and regulators as they navigate the evolving landscape of generative AI.
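
One way such an audit could surface shadow IT is to compare the AI tools observed in use against the approved list, as in this hypothetical sketch; the data sources and tool names are assumptions for illustration only, and in practice the observed tools might come from network logs, SSO records, or expense reports.

    # Hypothetical compliance check: compare AI tools observed in use against
    # the approved list to surface unauthorized ("shadow IT") tools.
    def find_shadow_ai_tools(observed_tools, approved_tools):
        """Return AI tools seen in use that are not on the approved list."""
        return observed_tools - approved_tools

    observed = {"chatgpt-enterprise", "free-llm-webapp", "ai-contract-reviewer"}
    approved = {"chatgpt-enterprise", "internal-summarizer"}
    print(sorted(find_shadow_ai_tools(observed, approved)))
    # ['ai-contract-reviewer', 'free-llm-webapp']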

Conclusion

The integration of generative AI into professional environments necessitates a careful balance between leveraging technological advancements and managing the accompanying risks. Organizations must remain vigilant in updating their legal frameworks, insurance policies [2], and internal guidelines to address the unique challenges posed by AI. By fostering a culture of ethical AI use and maintaining rigorous oversight, organizations can harness the benefits of AI while mitigating potential liabilities.

References

[1] https://lplc.com.au/resources/lij-article/managing-the-risks-of-ai-in-law-practices
[2] https://www.jdsupra.com/legalnews/5-key-takeaways-managing-the-risks-in-1944620/
[3] https://blogs.law.ox.ac.uk/oxford-university-undergraduate-law-journal-blog/blog-post/2025/02/should-generative-ai-have