Introduction

On 7 February 2025, the Organisation for Economic Co-operation and Development (OECD) introduced the Hiroshima AI Process (HAIP) Reporting Framework [3]. This initiative is designed to monitor adherence to the International Code of Conduct for Organizations Developing Advanced AI Systems [5], which was established in 2023 as part of the G7 Hiroshima AI Process. The framework aims to strengthen international AI governance by ensuring the safe, trustworthy, and responsible development of advanced AI systems [1] [3] [6], while addressing the urgent need for international agreements that prevent misuse and leverage AI for global societal benefit.

Description

As outlined above, the OECD launched the reporting framework on 7 February 2025 to monitor compliance with the Code of Conduct adopted in 2023 under the G7 Hiroshima AI Process [3] [5]. The initiative reinforces the G7's commitment to the safe, trustworthy, and responsible development of advanced AI systems [1] [3] [6], and it addresses the urgent need for international agreements to prevent misuse and harness AI for the benefit of global society, particularly in light of emerging challenges such as generative AI risks and disinformation.

The framework aims to standardize AI risk management practices globally, providing actionable guidelines for developers of advanced AI systems [6]. It enables organizations to report on their AI risk management practices, including risk assessment, incident reporting, and information-sharing mechanisms, thereby promoting trust and accountability in the development of advanced AI technologies [1] [3] [4]. Leading AI developers, including Amazon, Google, Microsoft, and OpenAI, have committed to participating in this inaugural framework [3] [4], with initial reports due by 15 April 2025 [3], followed by annual updates to ensure ongoing relevance and adaptability [4].

Built on the 11 actions of the Code of Conduct, the framework offers clear guidance for organizations in their reporting efforts, and an online platform facilitates the submission of reports, which are made publicly accessible [3]. A notable feature is the framework's emphasis on interoperability with other international AI governance mechanisms, promoting consistency across global standards and reducing redundancy in reporting requirements [3]. Its development also benefited from contributions by an international consortium, including DFKI, helping to ensure its practicality and effectiveness [3] [5].

Developed through multistakeholder cooperation, the framework incorporates input from the private sector, academia, and civil society [3]. Beyond serving as a reporting mechanism, it aims to share best practices and foster continuous improvement in AI development: participating organizations will contribute to a growing repository of knowledge on effective risk management strategies and responsible AI development approaches [3]. The initiative is crucial for promoting transparency and accountability in the AI industry, and its success will depend on ongoing engagement from the AI community and continuous refinement based on implementation experience [3].

In addition, the OECD has established a notification system to ensure that generative AI developers comply with their commitments under the Hiroshima Process Code of Conduct [2]. The framework is underpinned by the global AI principles agreed upon by G7 members in late 2023, which prioritize fairness, accountability, and transparency and serve as a foundation for responsible AI development [1] [2] [3] [4] [6]. Organizations interested in participating can access the reporting platform on the OECD AI Policy Observatory, which will provide updates and guidance to support reporting efforts [3].

Conclusion

The introduction of the Hiroshima AI Process Reporting Framework marks a significant step forward in international AI governance. By standardizing risk management practices and fostering transparency, the framework not only enhances trust and accountability but also encourages continuous improvement in AI development. Its success will depend on the active participation of the AI community and the ongoing refinement of practices based on real-world experiences. This initiative underscores the global commitment to harnessing AI’s potential responsibly and ethically, ensuring its benefits are realized across society.

References

[1] https://newsreel.com.au/article/business/global-framework-to-support-safer-ai-use/
[2] https://electronlibre.info/2025/02/07/oecd-announces-notification-system-for-international-code-of-conduct-on-ai/
[3] https://oecd.ai/en/wonk/how-the-g7s-new-ai-reporting-framework-could-shape-the-future-of-ai-governance
[4] https://opentools.ai/news/oecd-unveils-global-ai-framework-a-new-era-for-tech-giants
[5] https://www.dfki.de/en/web/news/oecd-presents-reporting-tool-for-advanced-ai-applications-in-paris
[6] https://www.digitaleurope.org/resources/tech7s-key-recommendations-for-the-paris-ai-action-summit/