Introduction
The integration of large language models (LLMs) and generative AI tools, such as OpenAI’s ChatGPT and specialized legal applications like Harvey and Thomson Reuters CoCounsel, is becoming increasingly significant in the legal sector [4]. These technologies offer potential enhancements across a range of legal tasks but also present challenges, particularly concerning the accuracy and reliability of AI-generated information. The Chartered Institute of Arbitrators (CIArb) has developed guidelines to address these issues [1] [5], aiming to incorporate AI responsibly into arbitration while maintaining procedural integrity.
Description
Large language models (LLMs) and generative AI tools, including OpenAI’s ChatGPT and bespoke legal applications such as Harvey and Thomson Reuters CoCounsel, are becoming increasingly relevant in the legal sector [4]. These technologies can enhance many aspects of legal work, including data analysis [5], research [1] [5] [6] [7], translation [1] [5] [6] [7], interpretation [1] [2] [3] [4] [6] [7], transcription [1] [5] [7], evidence collection [7], and case analysis [1] [7]. However, they are also prone to “hallucinations,” generating false or misleading information that can appear plausible. Studies indicate that general-purpose chatbots may hallucinate on between 58% and 82% of legal queries, while even bespoke legal tools produce incorrect information more than 17% of the time [4]. This risk is particularly acute in arbitration, where the integrity and accuracy of the process are paramount.
The Chartered Institute of Arbitrators (CIArb) has published the “Guideline on the Use of AI in Arbitration (2025)” to provide practical guidance for dispute resolution practitioners who aim to incorporate AI responsibly into the arbitration process [7]. The guidelines recognize both AI’s potential benefits, such as greater efficiency and improved quality [4], and its risks, including algorithmic bias [1], confidentiality concerns [2] [4], data security [1] [4] [5] [6], and implications for the enforceability of arbitral awards. CIArb emphasizes the necessity of independent verification of AI-generated content [6], urging arbitrators to use traditional legal research methods so that AI complements rather than replaces human judgment [6]. This approach is vital for maintaining the integrity of arbitration proceedings [6], especially given the opacity of many AI systems [3], which complicates the verification of their outputs.
Data security is another critical concern in the use of AI within arbitration [6]. The CIArb guidelines advocate for robust data protection measures and transparency in AI’s data handling practices [6], which are essential for safeguarding sensitive arbitration data and fostering trust among parties [6]. Implementing procedural safeguards [6], such as enhanced encryption and strict access controls [6], is crucial to prevent unauthorized data access and mitigate data breach risks [6].
The guidelines clarify that the responsibilities of parties and arbitrators are not diminished by the use of AI tools [2]. Participants are encouraged to weigh the benefits and risks of AI technologies and to be aware of relevant laws and regulations [2]. Arbitrators have the authority to regulate AI use within the arbitration process [2], including by issuing procedural orders and appointing AI experts [2], while respecting party autonomy regarding the choice and application of AI tools [2]. Where disputes over AI use arise [2], arbitrators can rule on the admissibility of AI-generated evidence and address challenges related to AI-assisted analysis [2].
To facilitate the responsible use of AI in international arbitration [6], the CIArb has introduced model agreements and procedural orders [6]. These templates provide clear mechanisms for defining the permitted AI applications and contexts [6], helping to prevent misunderstandings and disputes [6]. By ensuring compliance with guidelines on transparency [6], accountability [4] [6] [7], and ethical AI use [6], these agreements enhance the integrity of the arbitration process [6]. However, the guidelines lack clarity on the specifics of what should be disclosed regarding AI tools [3], raising questions about the extent of information required and the implications of non-disclosure [3], particularly when AI-generated recommendations are central to the arguments [3].
Recent surveys reveal growing reliance on AI tools among practitioners for tasks such as translation and document review [6], which can improve efficiency and reduce costs [6]. However, concerns about AI hallucination remain widespread [6], with a significant share of respondents worried about the potential for misleading AI outputs. This underscores the need for clearer guidelines and frameworks to manage AI applications in arbitration [6], with particular emphasis on transparency and verification of AI-generated information [6]. Effective transparency must extend beyond merely acknowledging that AI tools were used to include insight into how they function and how they may affect dispute outcomes [3].
There is also a strong demand for increased regulatory oversight of AI in arbitration [6], reflecting a recognition of the need for educational initiatives to improve AI literacy among legal professionals [6]. The CIArb guidelines address the risks associated with AI hallucination by emphasizing the importance of transparency and human oversight [6]. By recommending independent verification of AI-generated information [6] [7], the guidelines aim to prevent undue influence on arbitration outcomes [6], ensuring that decision-making is based on verified facts [6].
The integration of AI in arbitration raises regulatory challenges [6], necessitating a balance between leveraging technological benefits and maintaining procedural integrity [6]. The CIArb guidelines provide a framework for parties to define and regulate AI use [6], ensuring that AI enhances rather than undermines the arbitration process [6]. This proactive approach aligns with global trends advocating for clear and enforceable standards in AI usage [6].
The development of global institutional AI guidelines reflects an ongoing effort to balance technology’s capabilities with ethical responsibilities [6]. The CIArb’s comprehensive guidelines aim to preserve the fairness of arbitration while addressing the efficiencies and challenges posed by AI [6]. These guidelines promote independent verification and human oversight [6], ensuring that AI’s role is clearly delineated and that its use is transparent and accountable [6].
International cooperation is essential for effectively harnessing AI’s potential while minimizing risks [6]. Various institutions are developing guidelines to address regional and technological challenges [6], aligning with global efforts to ensure responsible AI deployment [6]. The European Union’s Artificial Intelligence Act categorizes AI use in legal proceedings as high-risk [6], emphasizing the need for stringent guidelines to mitigate risks such as data security breaches and biased decision-making [6]. The CIArb guidelines complement this by promoting responsible AI use [6], ensuring that the integrity of arbitration is maintained in the AI era [6].
Conclusion
The integration of AI in the legal sector [4], particularly in arbitration [4], presents both opportunities and challenges. While AI tools can enhance efficiency and quality, they also pose risks related to accuracy, data security [1] [4] [5] [6], and ethical use [6]. The CIArb guidelines provide a framework for responsibly incorporating AI into arbitration [5], emphasizing transparency, accountability [4] [6] [7], and human oversight [6]. As AI continues to evolve, ongoing international cooperation and regulatory oversight will be crucial in ensuring that its deployment in legal contexts is both effective and ethical, preserving the integrity of arbitration processes [6] [7].
References
[1] https://www.charlesrussellspeechlys.com/en/insights/expert-insights/dispute-resolution/2025/from-algorithms-to-awards-ciarbs-new-guidelines-on-ai-for-arbitration/
[2] https://www.lexology.com/library/detail.aspx?g=8c8e3c8e-0f98-4495-bb85-e4a8ea35de1c
[3] https://iistl.blog/2025/03/25/the-ciarb-2025-guidelines-on-ai-shaping-the-future-of-arbitration/
[4] https://www.jdsupra.com/legalnews/ai-in-international-arbitration-ciarb-9250686/
[5] https://www.lexology.com/library/detail.aspx?g=89f9b585-d2c3-4502-bc75-a24d484e82dd
[6] https://opentools.ai/news/ciarb-unveils-cutting-edge-ai-guidelines-for-international-arbitration
[7] https://www.costiganking.com/insights-2/ai-in-arbitration-new-guidelines-for-a-tech-driven-future