Google has expanded its Vulnerability Rewards Program (VRP) to include compensation for researchers who find attack scenarios specific to generative artificial intelligence (AI) systems [10]. This expansion aims to incentivize research around AI safety and security [2] [3] [4] [5] [9].
To further promote responsible AI practices and encourage research in AI safety, the expanded VRP covers vulnerability classes specific to generative AI [1], including unfair bias, model manipulation [1] [2] [4] [7] [8] [10] [11], data misinterpretation [1] [4] [7] [10], and adversarial attacks [1]. By incentivizing more security research and applying supply chain security practices to AI [11], the company hopes to deepen collaboration with the open source security community and ultimately make AI safer for everyone [11].

In addition to the VRP expansion, Google has established an AI Red Team and is working to strengthen the AI supply chain through open-source security initiatives [10], collaborating with the Open Source Security Foundation to ensure the integrity of AI supply chains [1]. OpenAI has likewise formed a Preparedness team to guard against risks posed by generative AI [10], and Google [8] [10] [11], OpenAI [1] [10], Anthropic [10], and Microsoft have together created a $10 million AI Safety Fund to support research in this field [10].

Furthermore, Google has published more detailed reward criteria for reporting bugs in AI products [1], making it easier for researchers to determine what is in scope [1]. The company has also introduced the Secure AI Framework to support the development of responsible and safe AI applications [1]. In 2022 [3] [8], Google paid out over $12 million in rewards to security researchers [3] [5] [6]. Rewards vary by severity [3], up to a maximum of $31,337 for vulnerabilities in its most sensitive applications [3].

This expansion of the VRP comes as AI companies, including Google [2], commit to greater discovery and disclosure of AI vulnerabilities [2]. Additionally, President Biden is reportedly set to issue an executive order establishing strict assessments and requirements for AI models used by government agencies [2]. Under the expanded Bug Hunter Program, third parties can now discover and report issues and vulnerabilities specific to Google's AI systems [6]. The company has published further details on the new reward program elements [6], which it expects will encourage greater collaboration [6]. Google has also identified common tactics and procedures that real-world adversaries may use against AI systems [6], and has set out bug report criteria to help the bug hunting community test the safety and security of AI products [6]. The program's scope covers traditional security vulnerabilities as well as risks specific to AI systems [6], with reward amounts depending on the severity of the attack scenario and the type of target affected [6]. Google says it is committed to working with the research community to discover and fix security and abuse issues in its AI-powered features [6].
The expansion of Google’s VRP to include vulnerabilities specific to generative AI demonstrates the company’s commitment to AI safety and security. By incentivizing research and collaboration, Google aims to make AI safer for everyone [4]. The establishment of an AI Red Team, the collaboration with the Open Source Security Foundation [1] [11], and the creation of the AI Safety Fund further underscore the importance of addressing AI vulnerabilities, and with President Biden’s anticipated executive order on AI model assessments, scrutiny of AI security is expected to grow. The expanded Bug Hunter Program and the published reward criteria should encourage broader collaboration and testing of AI products, and Google’s engagement with the research community highlights the ongoing effort to keep its AI-powered features safe and secure.