Introduction

The updated guidance for judicial office holders on the use of Artificial Intelligence (AI) emphasizes maintaining the integrity of the justice system while incorporating AI tools. It addresses concerns about misinformation [3] [4] [6], bias [2] [3] [4] [6], and data quality [3] [4] [6], and outlines the responsibilities of judges and legal representatives when relying on AI-generated information.

Description

Senior judges have released updated guidance for judicial office holders regarding the use of Artificial Intelligence (AI) [3] [4], replacing the previous document from December 2023 [6]. This new guidance expands the glossary of common terms and definitions [3] [4], addressing concerns related to misinformation [3] [4], bias [2] [3] [4] [6], and the quality of datasets [3] [4] [6]. Judges are reminded of their personal responsibility for the AI-generated information they present in court [3] [4], similar to any other type of evidence [3] [4], and they are advised to inform litigants of this accountability.

The guidance emphasizes that the use of AI must align with the obligation to maintain the integrity of the justice system and to promote open justice and public confidence [6]. Judges are encouraged to have a basic understanding of AI’s capabilities and limitations [2], to prioritize confidentiality and privacy [2] [5], and to remain vigilant about bias [2]. It highlights the availability of Microsoft’s private AI tool [3] [4] [6], Copilot Chat [3] [4] [5] [6], which is now included in standard updates on judicial devices [3] [4]. Data entered into Copilot Chat remains secure and private for users logged into their eJudiciary accounts [3] [4] [6], and the Judicial College has provided a guide to using the tool [3] [4] [6]. However, while Copilot Chat is described as secure, its use is not explicitly encouraged [1].

Additionally, the guidance addresses the use of AI tools by court users [5], including unrepresented litigants [5], and underscores that all legal representatives must ensure the accuracy and appropriateness of the materials they present to the court or tribunal [5]. Key indicators that a party may have used AI include citations to unfamiliar cases or cases with unusual citation formats [1], submissions citing differing bodies of case law on the same issues [1], and submissions featuring American spelling or references to overseas cases [1]. While there is no obligation to disclose the use of AI [5], the guidance suggests that lawyers should confirm they have independently verified any AI-generated research or case citations [5]. For unrepresented litigants [5], it may be necessary to ask what accuracy checks they have performed and to remind them that they are responsible for what they present to the court [5].

The guidance also addresses the risks associated with AI, including the potential for “fake material,” the challenges posed by deepfake technology, and unintentional forgeries [1], such as fictitious citations or legal texts [1]. Users are cautioned that anything typed into a public AI tool could become publicly known [1], and that uploading confidential information to such a tool should be treated as a data breach [1]. Overall, the updated guidance refines previous recommendations and indicates a continued embrace of AI within the judiciary [5], while reiterating that AI tools are not recommended for legal research or analysis [5], signalling that AI-generated judicial legal reasoning is still not considered feasible [5].

This updated guidance is applicable to all judicial office holders under the jurisdiction of the Lady Chief Justice and Senior President of Tribunals [3] [4] [6], including clerks [1] [3] [4] [6], judicial assistants [3] [4] [6], legal advisers [3] [4] [6], and support staff [3] [4] [6]. The document has been issued by key judicial figures [3] [4], including the Lady Chief Justice of England & Wales and the Master of the Rolls [3] [4].

Conclusion

The updated guidance reflects a cautious yet progressive approach to integrating AI within the judiciary. By addressing potential risks and emphasizing accountability, it aims to safeguard the integrity of the justice system while acknowledging the evolving role of AI. The guidance underscores the need for judicial office holders to remain informed and vigilant, ensuring that AI tools are used responsibly and effectively in legal proceedings.

References

[1] https://www.legalcheek.com/2025/04/judges-given-guidance-on-how-to-spot-ai-generated-submissions/
[2] https://www.osborneclarke.com/insights/ai-guidance-published-judges
[3] https://www.localgovernmentlawyer.co.uk/litigation-and-enforcement/400-litigation-news/60684-courts-and-tribunals-judiciary-issues-refreshed-guidance-on-use-of-ai-by-judicial-office-holders?tmpl=component
[4] https://www.localgovernmentlawyer.co.uk/litigation-and-enforcement/400-litigation-news/60684-courts-and-tribunals-judiciary-issues-refreshed-guidance-on-use-of-ai-by-judicial-office-holders
[5] https://www.taylorwessing.com/en/insights-and-events/insights/2025/04/dqr-updated-judicial-guidance-on-the-use-of-ai
[6] https://www.judiciary.uk/guidance-and-resources/artificial-intelligence-ai-judicial-guidance-2/