Introduction
In recent years, the legal profession in the United States has grappled with the growing use of artificial intelligence (AI) to generate court filings. Despite judicial standing orders intended to curb the submission of documents with fabricated case citations, the problem persists, raising concerns about the integrity of legal proceedings and the responsibilities of legal professionals.
Description
A growing number of attorneys and pro se litigants in the US are submitting court filings that include fake case citations generated by AI, despite judicial standing orders aimed at preventing exactly that [4]. The prevalence of these flawed filings has increased notably since early incidents in which New York lawyers relied on generative AI tools such as ChatGPT and Claude, an AI chatbot developed by Anthropic [2]. Experts initially anticipated that standing orders would curb the problem; in practice, the ambiguity of the orders has rendered them largely ineffective, and the risk of citing non-existent cases continues to rise [4].
The fabrication of AI-generated case citations has become a significant problem in legal practice, with attorneys submitting briefs that cite court decisions that do not exist [5]. In one example, lawyers using ChatGPT included six fictitious citations, complete with detailed quotes from judicial opinions that were entirely fabricated; the fake cases bore plausible names, and opposing counsel could not locate them in any legal database [5].

Judicial standing orders on AI vary significantly across jurisdictions: some prohibit generative AI use in court filings or require its disclosure, while others merely urge caution in its application [4]. Many of these orders fail to define the specific AI tools or use cases they address, often conflating general AI with generative AI; the resulting breadth may inadvertently sweep in benign technologies such as spellcheck [4]. A 2023 standing order issued by Judge Michael M. Baylson, for instance, referenced AI tools without specifying generative AI, while requiring attorneys to disclose AI use and certify the accuracy of their citations [4].
Legal professionals, including retired US magistrate judge Andrew Peck, have criticized these ambiguous orders for sowing confusion and encouraging noncompliance among lawyers and litigants [4]. Judges are increasingly frustrated by AI-generated hallucinations appearing in court documents, which damage the legal profession's reputation [3]. The risks are compounded when models are enhanced with external tools: even with web browsing capabilities, the underlying text generation process remains prone to fabricating details, misattributing information, or blending real and imagined sources, producing sophisticated misinformation [5]. Despite guidance from the American Bar Association (ABA) and discussions at legal conferences, fake citations continue to emerge [3]. A Texas court recently fined an attorney $2,500 for submitting a brief that cited multiple fictitious cases [1], underscoring the risks of relying on generative AI tools without thorough verification [2]; a similar penalty imposed by the Court of Appeals for the Fifth District of Texas on an attorney who cited fictitious cases highlights that the problem is ongoing [3].
Experts emphasize that avoiding these errors requires lawyers to fulfill their fundamental responsibilities, including actually reading the cases they cite [3]. The ABA's Model Rules of Professional Conduct require lawyers to maintain competence and diligence, and attorneys who supervise brief drafters share responsibility for ensuring accuracy [3]. The ABA has reiterated that the use of generative AI tools must align with ethical obligations regarding competence, confidentiality, and communication with clients [3]. Legal ethicists have expressed alarm over the growing reliance on generative AI for critical legal work without adequate oversight, emphasizing that AI cannot replace human due diligence [2].
Addressing the fundamental issues with AI citation generation requires architectural changes: current pattern-based approaches should be supplemented or replaced with retrieval-based systems that can cite only verifiable sources [5]. Verification tools for AI-generated content are also progressing, with systems designed to cross-check citations against multiple databases, flag suspicious patterns, and assign confidence scores, as in the sketch below; while not infallible, they can significantly reduce the manual verification workload [5]. A broader gap remains in understanding how AI and large language models actually work, which contributes to the ongoing challenges in legal practice [3]. Training on AI usage is essential, but hands-on experience with the technology is what teaches practitioners to recognize its pitfalls [3]. Larger law firms may be better positioned to keep their training programs current, but effective learning still depends on attorney engagement [3].
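None of the sources specifies an implementation for such a verification tool, but a minimal sketch of the cross-checking idea might look like the following. Everything here is an illustrative assumption rather than a real product: the citation regex is deliberately crude, `VerificationResult` is an invented structure, and the stub lookup stands in for queries to real services such as Westlaw, Lexis, or CourtListener.

```python
import re
from dataclasses import dataclass
from typing import Callable

# Matches common US reporter citations, e.g. "410 U.S. 113" or "575 F.3d 342".
# Illustrative only; a production tool would use a far more robust citation parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\. Supp\. \d?d?|F\.\d?d?)\s+\d{1,4}\b"
)

@dataclass
class VerificationResult:
    citation: str
    confirmations: int    # how many databases found the cited case
    sources_checked: int  # how many databases were queried

    @property
    def confidence(self) -> float:
        return self.confirmations / self.sources_checked

def verify_brief(text: str,
                 databases: list[Callable[[str], bool]]) -> list[VerificationResult]:
    """Cross-check every citation found in a draft brief against each database."""
    results = []
    for citation in sorted(set(CITATION_RE.findall(text))):
        hits = sum(1 for lookup in databases if lookup(citation))
        results.append(VerificationResult(citation, hits, len(databases)))
    return results

# Stub lookup standing in for real legal-database APIs.
known_cases = {"410 U.S. 113"}
databases = [lambda c: c in known_cases]

draft = "As held in 410 U.S. 113 and reaffirmed in 999 F.3d 1234, sanctions apply."
for r in verify_brief(draft, databases):
    status = "verified" if r.confirmations else "FLAG: not found in any database"
    print(f"{r.citation}: {status} (confidence {r.confidence:.0%})")
```

Run against the sample draft, this flags the fabricated "999 F.3d 1234" while confirming the real citation, which is exactly the failure mode opposing counsel encountered in the ChatGPT incidents: plausible-looking citations that no database can return.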
To mitigate the risk of hallucinated citations, courts have begun adopting AI judicial standing orders, with roughly 40 US courts having done so [3]. Suggested measures include requiring lawyers to certify the accuracy of AI-assisted briefs and to disclose their use of AI, which could reduce the incidence of fake citations [3]. Fines for violations may deter some misconduct [4], but experts argue that such measures should not be overly punitive or treated as a sustainable solution. The rise in fake AI-generated case citations has also been linked to pressure from clients and law firms, as well as attorney negligence, raising concerns that the problem will persist [4]. Ultimately, responsibility for accuracy and trust in legal proceedings rests with the attorneys, not the algorithms [2]. To mitigate risk, every sentence generated by AI should be validated against credible legal sources, and firms should maintain audit logs of AI interactions to ensure accountability (a simple logging sketch follows below) [2] [3]. The legal profession is at a critical juncture, where the efficiencies offered by AI tools must be balanced against the need for careful review and ethical oversight [2].
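The audit-log recommendation in [2] likewise comes without an implementation. A minimal sketch, assuming every model call can be funneled through a single wrapper, is shown below; the `logged_generate` helper, the JSONL file name, and the stand-in model are all hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_interactions.jsonl"  # append-only log, one JSON record per call

def logged_generate(generate, prompt: str, user: str) -> str:
    """Call the model and append a tamper-evident record of the interaction."""
    response = generate(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
        # Hash binds the record to its exact content for later integrity checks.
        "sha256": hashlib.sha256((prompt + response).encode()).hexdigest(),
        "human_verified": False,  # flipped only after an attorney checks the citations
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Stand-in model for demonstration; a firm would wrap its actual AI client here.
def fake_model(prompt: str) -> str:
    return "Draft paragraph citing Smith v. Jones, 123 F.3d 456 ..."

print(logged_generate(fake_model, "Summarize the standard for sanctions.", "associate01"))
```

A log like this gives supervising attorneys and courts a trail showing what the model produced, who requested it, and whether a human ever signed off, which is the accountability the sources call for.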
Conclusion
The increasing reliance on AI-generated content in legal filings poses significant challenges to the integrity of the legal profession. While AI tools offer efficiencies, they also introduce risks of misinformation and ethical breaches. Legal professionals must prioritize accuracy and ethical standards, ensuring that AI-generated content is thoroughly verified. The legal community must adapt to these technological advancements by implementing clear guidelines, enhancing training, and fostering a culture of accountability to maintain trust in legal proceedings.
References
[1] https://news.bloomberglaw.com/business-and-practice/court-filings-rife-with-fake-ai-case-cites
[2] https://www.aiplusinfo.com/ai-chatbot-cites-fake-legal-case/
[3] https://news.bloomberglaw.com/legal-ops-and-tech/ai-fake-citations-expose-lawyer-sloppiness-and-training-gaps
[4] https://www.law.com/legaltechnews/2025/06/23/why-arent-judicial-standing-orders-preventing-fake-ai-case-citations-/
[5] https://medium.com/@nomannayeem/the-fabrication-problem-how-ai-models-generate-fake-citations-urls-and-references-55c052299936