Google has recently introduced new methods for labeling AI-generated text and video, improving transparency and security in digital content creation.


Google has integrated a new method into its SynthID tool that labels AI-generated text without perceptibly altering it; the tool can now detect whether text was generated by an AI model by comparing token scores [1]. The feature has been deployed in Google's Gemini chatbot [1], making it more difficult for malicious actors to misuse AI-generated content [1]. Google has also introduced a video-generation tool called Veo in its VideoFX app [2], whose output will carry digital watermarks produced by the SynthID system [2]. The same system will watermark AI-generated text produced by Gemini [2], ensuring transparency about the origins of the content [2].
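Google has not published SynthID's exact algorithm in this article, but the general idea of token-score watermarking can be illustrated with a simplified, hypothetical scheme: during generation, the model is nudged toward a "green" subset of the vocabulary that is re-derived from the previous token at detection time, so watermarked text contains statistically more green tokens than ordinary text. The vocabulary, function names, and scoring rule below are illustrative assumptions, not SynthID's actual implementation.

```python
import hashlib
import random

# Toy vocabulary and green-list fraction -- illustrative values only,
# not SynthID's actual parameters.
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5

def green_list(prev_token: str) -> set:
    # Seed a PRNG with a hash of the previous token so the same
    # green/red vocabulary split can be recomputed at detection time.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(n: int, watermark: bool, seed: int = 0) -> list:
    # Stand-in for a language model: watermarked generation samples
    # only from the green list of the preceding token.
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    for _ in range(n - 1):
        pool = sorted(green_list(tokens[-1])) if watermark else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_score(tokens: list) -> float:
    # Detection: fraction of tokens that fall in the green list of
    # their predecessor. Near 0.5 for ordinary text, near 1.0 if
    # this watermark was applied.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

In this sketch, scoring a 200-token watermarked sample yields a green fraction close to 1.0, while unwatermarked text hovers around the baseline of 0.5, which is how a detector can flag AI-generated text without access to the original model output.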


These advances in labeling AI-generated content should meaningfully improve security and transparency in digital content creation. By pairing digital watermarks with detection methods, Google is taking proactive steps to prevent misuse of AI-generated content, protecting users from harmful material while promoting accountability and authenticity in online interactions. As the technology evolves, such measures will play a crucial role in preserving the integrity of digital content.