Introduction
The rapid advancement of artificial intelligence (AI) necessitates the establishment of global standards to ensure its responsible development and deployment. These standards are crucial for reducing costs, enhancing affordability, and promoting the widespread adoption of reliable AI technologies [1]. They also play a vital role in safeguarding fundamental rights and ensuring that AI serves the interests of all individuals. International organizations, legal frameworks, and collaborative initiatives are pivotal in this process, emphasizing transparency, responsibility, and inclusivity in AI governance [1] [2].
Description
As AI continues to advance, technical standards must keep pace with it. International organizations play a key role in this effort, emphasizing transparency, responsibility, and inclusivity in AI governance while ensuring that AI serves the interests of all individuals and safeguards fundamental rights [1] [2].
Legal frameworks often lag behind technological advancements [1], with only a few countries having enacted laws governing AI [3]. The European Union is at the forefront with its comprehensive AI Act, while the United States relies on voluntary compliance and China emphasizes social stability and state control [3]. The concentration of AI development among a few multinational companies raises concerns about technology being imposed without public input [3]. Technical standards, however, can align swiftly with national policy objectives, adapting to the rapid pace of private-sector innovation [1]. The International Telecommunication Union (ITU) has already published 120 AI standards, with an additional 130 in development, which are essential for fostering AI development that meets societal and environmental needs [1].
AI’s integration into sectors including finance, healthcare, and education has raised concerns about biases and malfunctions, particularly in critical applications such as medical diagnostics and autonomous vehicles [1]. Significant disparities in AI capabilities persist across countries and regions, with imbalances in patent ownership, tool development, and data processing that could exacerbate global inequalities [1] [2] [3]. To address these challenges, ongoing dialogue and collaboration among nations are essential for standardizing AI regulations and ensuring that ethical considerations are integrated into AI systems worldwide [1] [2].
Initiatives such as the AI Skills Coalition and the Digital Infrastructure Investment Initiative aim to bridge gaps in skills and infrastructure, with a focus on enhancing digital capabilities globally [1]. AI’s potential to contribute positively to global goals, such as the UN Sustainable Development Goals, underscores the need for responsible governance and standards that mitigate risks while maximizing benefits [1].
The UN has proposed several recommendations to enhance AI governance: initiating a new policy dialogue, establishing an AI standards exchange, and creating a global AI capacity development network to improve governance capabilities [3]. It also suggests a global AI fund to address collaboration and capacity gaps, a global AI data framework to ensure transparency and accountability, and a small AI office to support and coordinate the implementation of these initiatives [3].
Standards will influence AI design and usage, affecting privacy, freedom of expression, and access to information [1]. The UN Human Rights Council has emphasized the importance of integrating human rights considerations into technical standard-setting processes [1]. Collaborative efforts among governments, industry, civil society, and technical experts are essential for developing transparent and inclusive standards [1]. Ethical frameworks should also be integrated into the design process of AI systems, using methodologies such as value-sensitive design to reflect societal values [2].
The World Standards Cooperation, a partnership between the ITU, the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC), is committed to embedding human rights perspectives into technical standards [1]. This collaboration extends to initiatives addressing climate change, where AI can help monitor and reduce greenhouse gas emissions [1]. The Declaration on Green Digital Action, endorsed by numerous countries and organizations, highlights the importance of digital technologies in climate discussions, and the Coalition for Sustainable Artificial Intelligence further reinforces the commitment to environmentally friendly AI practices [1].
Efforts to harness AI for specific sectors, such as health, agriculture, and disaster management, are underway, with collaborations involving various UN entities [1]. Initiatives to combat misinformation through AI watermarking and deepfake detection are also being developed [1].
The Global Digital Compact calls for enhanced international governance of AI, urging standard-setting organizations to promote interoperable AI standards that prioritize safety, reliability, sustainability, and human rights [1] [3]. Upcoming events, including the International AI Standards Summit and the AI for Good Global Summit, will focus on advancing these standards [1].
Alongside the risks associated with AI’s growth, there is significant opportunity for economic value creation, particularly through generative AI [1]. Establishing robust technical standards is essential for capitalizing on AI’s potential while ensuring that its development benefits all stakeholders [1]. Policymakers and researchers should work together to refine AI ethics guidelines through continuous dialogue, common standards, and capacity-building initiatives, ensuring that ethical considerations remain central to AI development and deployment [1] [2].
Conclusion
The establishment of global AI standards is imperative for ensuring that AI technologies are developed and deployed responsibly, with a focus on reducing costs, enhancing affordability, and safeguarding fundamental rights [1]. These standards will significantly shape AI design and usage, influencing privacy, freedom of expression, and access to information [1]. Collaborative efforts among international organizations, governments, industry, and civil society are essential for developing transparent and inclusive standards [1]. By integrating ethical frameworks and human rights considerations into AI systems, we can harness AI’s potential to contribute positively to global goals while mitigating risks and maximizing benefits.
References
[1] https://www.itu.int/hub/2025/04/standards-help-unlock-trustworthy-ai-opportunities-for-all/
[2] https://www.restack.io/p/ai-governance-answer-ai-standards-global-development-cat-ai
[3] https://thefinancialexpress.com.bd/sci-tech/un-advisory-body-makes-seven-recommendations-for-governing-ai