Introduction

In the contemporary digital landscape, the proliferation of misinformation and disinformation, particularly through the misuse of deepfakes and artificial intelligence [3], poses significant challenges. International standards and practical tools are crucial for fostering trust and resilience. The AI and Multimedia Authenticity Standards Collaboration (AMAS) has taken a proactive role by publishing two papers to guide the global governance of AI.

Description

International standards and practical tools are essential for building trust and resilience in the face of rapidly spreading misinformation and disinformation [6], particularly the misuse of deepfakes and artificial intelligence to facilitate fraud [3]. The AI and Multimedia Authenticity Standards Collaboration (AMAS) has introduced two key papers aimed at addressing these pressing issues and guiding the global governance of AI. The first provides a detailed overview of existing standards and specifications for digital media authenticity and artificial intelligence, while the second offers policymakers guidance on using international standards to regulate the creation, use, and dissemination of synthetic multimedia content [1] [2] [3] [4].

AMAS emphasizes the need to protect information integrity, uphold individual rights, and foster trust in the digital ecosystem through robust technical standards that support regulatory frameworks [1] [2] [3] [4]. The initiative aims to enable users to trace the origin of AI-generated and altered content while still promoting creativity [1] [3] [4]. A structured roadmap helps policymakers and regulators navigate the complexities of AI-generated and manipulated content, focusing on prevention, detection, and response strategies [5] [6]. Striking a balance between preserving freedom of expression and fostering innovation while protecting society from manipulated media is crucial [6]. A regulatory options matrix outlines what to regulate, how, and to what extent, while supporting tools such as standards and conformity assessments promote regulatory coherence across borders [6].

To further support regulatory efforts, checklists help regulators and technology providers design effective regulations and enforcement mechanisms, develop resilient technologies, and prepare for crises [6]. The UK government has identified deepfakes as a significant online challenge, and detection has been prioritized by the regulator Ofcom [5]. As AI-generated media evolves, global collaboration among diverse stakeholders is vital [6], including major tech companies such as Adobe and Microsoft, as well as research institutions such as Germany’s Fraunhofer Institute and the Swiss Federal Institute of Technology [5].

The upcoming International AI Standards Summit, scheduled for December 2-3, 2025, in Seoul, will advance global dialogue on AI governance and the establishment of global standards for inclusive and responsible AI development [1] [2] [4]. Organized by IEC, ISO, and ITU, the summit will convene key stakeholders and experts to lay the groundwork for global standards that encourage ethical practices in AI [4]. Digital content must be traceable, trustworthy, and ethically produced [6]. Experts in multimedia authenticity are gathering feedback to align technical and policy efforts, with international dialogues shaping a shared understanding of risks and best practices [6]. Collaboration among organizations is necessary to address challenges such as misinformation, with initiatives focusing on content provenance, trust and authenticity, asset identifiers, rights declarations, and watermarking [5].
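To make the provenance idea above concrete, the sketch below shows one minimal way a provenance record can bind an asset identifier to a cryptographic hash of the content, so that any later alteration is detectable. This is a hypothetical illustration in Python, not an implementation of any AMAS or C2PA specification; the field names (`asset_id`, `generator`) and the unsigned-manifest design are assumptions for clarity only, and real provenance schemes additionally sign the manifest so it cannot itself be forged.

```python
import hashlib


def make_manifest(asset_id: str, content: bytes, generator: str) -> dict:
    """Build a minimal provenance record binding an asset ID to a content hash.

    `asset_id` and `generator` (e.g. the declared AI model or tool) are
    hypothetical fields chosen for this sketch, not a standard schema.
    """
    return {
        "asset_id": asset_id,
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }


def verify(manifest: dict, content: bytes) -> bool:
    """Return True only if the received content still matches the recorded hash."""
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]


if __name__ == "__main__":
    original = b"example media bytes"
    manifest = make_manifest("urn:example:asset:001", original, "example-model-v1")

    print(verify(manifest, original))         # unchanged content verifies
    print(verify(manifest, original + b"!"))  # any alteration fails the check
```

In a full system the manifest would travel with the asset (or be resolvable from its identifier) and would carry a digital signature from the issuer, which is what turns a simple integrity check into a trust and authenticity claim.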

The World Economic Forum has highlighted disinformation as a critical global risk for 2025, noting the substantial reputational and financial damage it can inflict on businesses of all sizes [5]. Inclusive international standards will reflect real needs and build trust in online content [6]. The ongoing partnership aims to expand its reach and shape the standards that will define the future of digital media, protecting information integrity and upholding individual rights while allowing users to identify the origins of AI-generated content without hindering creativity [2] [6]. The papers from AMAS are available for download from the collaboration's website, further promoting transparency and engagement in these vital discussions.

Conclusion

The efforts of AMAS and its collaborators underscore the importance of comprehensive international standards in combating misinformation and disinformation in the digital age. By fostering global cooperation and dialogue, these initiatives aim to protect information integrity, uphold individual rights, and promote trust in digital content [1] [2] [3] [4]. The proactive measures and strategic frameworks now being developed will have far-reaching implications, helping ensure that AI and digital media are used ethically and responsibly and safeguarding society from the adverse effects of manipulated media.

References

[1] https://www.eejournal.com/industry_news/ai-and-multimedia-authenticity-standards-collaboration-launches-two-papers-to-guide-future-of-ai-integration-today-at-the-ai-for-good-global-summit/
[2] https://www.iso.org/news/2025/07/ai-for-good-global-summit-2025
[3] https://betanews.com/2025/07/11/international-collaboration-aims-to-combat-deepfakes-and-ai-misuse/
[4] https://digitalforensicsmagazine.com/ai-for-good-global-summit/
[5] https://www.biometricupdate.com/202507/un-initiative-unites-standards-bodies-to-tackle-global-deepfake-threat
[6] https://www.itu.int/hub/2025/07/standards-and-policy-considerations-for-multimedia-authenticity/