Major tech companies [1] [3] [5] [7], including Adobe [1] [2] [3] [4] [5] [6] [8] [9], Microsoft [1] [2] [3] [4] [5] [6] [7] [8] [9], and OpenAI [1] [2] [3] [4] [5] [8] [9], have collaborated with the US government to combat image-based sexual abuse, including non-consensual intimate images and AI-generated deepfakes.
Description
These partnerships involve responsibly sourcing datasets, removing explicit content from training data [9], and rigorously testing development processes to prevent image-based sexual abuse. The White House has highlighted the significance of combating image-based sexual abuse [2], securing commitments from leading AI vendors as part of initiatives marking the 30th anniversary of the Violence Against Women Act.

Industry firms such as Bumble, Discord [5], Match Group [5], Meta [1] [2] [5] [6] [7] [9], and TikTok have also joined these efforts, and payment companies such as Cash App and Square are monitoring and restricting related payments. Google [1] [2] [6] [9], Meta [1] [2] [5] [6] [7] [9], and Snap Inc. [1] [6] have implemented measures against non-consensual intimate images and the tools that facilitate their dissemination [2], including restricting payments to companies involved in producing or distributing abusive sexual images and improving reporting systems for survivors of abuse [1].

A working group of technology firms [2], civil society groups [2] [9], and researchers will explore interventions to prevent and mitigate the harms of image-based sexual abuse [2], reflecting a collaborative effort to foster a safe and secure AI environment [2]. President Biden and Vice President Harris have prioritized addressing gender-based violence [6], acknowledging the disproportionate impact of image-based sexual abuse on women [6], children [2] [5] [6] [7] [8], and LGBTQI+ communities [6]. Companies such as GitHub and Common Crawl have taken additional steps [6], including strengthening their platforms [6], prohibiting the distribution of software tools for creating non-consensual images [6], identifying and removing abusive imagery from search results [6], and refining reporting mechanisms [6].
Several major tech companies [3], including Anthropic and Cohere [1] [3] [4] [5] [6], have committed to removing nude images from AI training datasets and to implementing safeguards that prevent AI models from perpetuating image-based sexual abuse [3], commitments commended by the White House Gender Policy Council.
Conclusion
The efforts of major tech companies, in collaboration with the US government, to combat image-based sexual abuse are crucial to safeguarding individuals from harm and promoting a secure digital environment. By addressing these issues, the companies are taking proactive steps to protect vulnerable populations and uphold ethical standards in AI development.
References
[1] https://fortune.com/2024/09/13/openai-anthropic-and-microsoft-white-house-commitment-to-prevent-ai-deepfakes-sexual-abuse/
[2] https://www.infosecurity-magazine.com/news/white-house-ai-abuse-images/
[3] https://www.inkl.com/news/openai-anthropic-join-white-house-pledge-on-deepfakes
[4] https://finance.yahoo.com/news/openai-anthropic-microsoft-join-white-130128123.html
[5] https://www.thestar.com.my/tech/tech-news/2024/09/13/tech-companies-commit-to-fighting-harmful-ai-sexual-imagery-by-curbing-nudity-from-datasets
[6] https://winbuzzer.com/2024/09/16/white-house-secures-ai-industrys-commitment-on-deepfakes-xcxwbn/
[7] https://san.com/media-miss/tech-companies-commit-to-fighting-harmful-ai-sexual-imagery-by-curbing-nudity-from-datasets/
[8] https://opentools.ai/news/white-house-secures-ai-vendors-pledge-to-combat-deepfake-nudes
[9] https://www.govexec.com/technology/2024/09/white-house-leads-public-private-commitment-curb-ai-based-sexually-abusive-material/399558/