Generative artificial intelligence (AI) is revolutionizing cybersecurity by leveraging vast datasets to detect and prevent cyber threats in real time. However, it also poses new challenges [7], such as the creation of deepfakes and sophisticated phishing emails. This article explores the concerns raised by the use of generative AI in cyber attacks and the steps organizations can take to mitigate these risks.


Generative AI has proven instrumental in identifying and mitigating previously unseen vulnerabilities in cybersecurity [2]. However, a recent study by cybersecurity consultancy Gemserv highlights concern among Chief Information Security Officers (CISOs) about the use of deepfake AI technologies in cyber attacks [7]. The study reveals that 83% of respondents believe generative AI will play a more significant role in future cyber attacks [7]. Additionally, 38% expect a significant increase in attacks utilizing these technologies [7] [8], while 45% anticipate a moderate rise over the next five years [7]. Yet only 16% of respondents believe their organizations have an excellent understanding of these advanced AI tools [7] [8].

One emerging risk is the unauthorized use of generative AI tools by employees, known as shadow AI. This poses a significant threat because these tools can accumulate sensitive company data [1], potentially damaging corporate reputation if that data is exposed. To address this issue [1], organizations should provide secure, sanctioned generative AI tools and implement policies governing the data uploaded to them [1]. It is crucial to educate employees about the risks of using generative AI and to enforce cybersecurity policies [1]. CISOs should also reassess identity and access management capabilities so they can monitor for unauthorized AI solutions [1]. By taking proactive measures and using updated security tools [1], organizations can ward off shadow AI and capture the transformative value of generative AI while avoiding security breaches [1].
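In practice, one place such monitoring can start is the organization's web proxy or DNS logs. The sketch below is a minimal illustration, not a production control: the log format (a CSV with `user` and `domain` columns) and the short domain list are assumptions for the example; a real deployment would pull an up-to-date list of generative AI service domains from a maintained feed and integrate with existing identity and access management tooling.

```python
import csv
from collections import Counter

# Hypothetical list of generative AI service domains; a real control
# would source this from a maintained, regularly updated feed.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def find_shadow_ai_usage(proxy_log_path):
    """Scan a CSV proxy log (columns: user, domain) and count
    requests each user made to known generative AI services."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return dict(hits)
```

A report like this does not by itself distinguish sanctioned from unsanctioned use, so it works best alongside the policy and education measures described above.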

Despite the widespread use of generative AI tools by employees, many organizations lack plans to counter the associated security risks [5]. Concerns about inaccurate answers from generative AI apps often overshadow security-centric issues [5], and while some organizations have banned generative AI tools outright, employees continue to use them, making such bans difficult to enforce [5]. Many organizations also lack technology for monitoring AI tool usage and provide only limited user training and governance policies [5]. Addressing this challenge requires combining education, clear communication about the dangers of misusing AI, and technical controls [5].

For IT leaders, the shortage of knowledge and resources available to inform policies and training remains a challenge [5]; clear directives can give business leaders assurance when establishing governance and policies for generative AI tools [5]. Because the use of these tools is still in its infancy, open questions remain around data privacy and compliance [5]. The IT security community has a role to play in understanding the potential risks and implementing guardrails and policies that protect privacy and data security [5]. CISOs and CIOs must balance the need to keep sensitive data out of generative AI tools against the business need to use them to improve processes and increase productivity [5]. Although many generative AI tools offer enhanced privacy protection [5], organizations still need to ensure compliance with relevant requirements [5].
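One simple technical control that complements education is screening prompts for sensitive data before they leave the organization. The following is an illustrative sketch only: the regex patterns and labels are assumptions chosen for the example, and production data-loss-prevention controls rely on far richer detection (document fingerprinting, entity recognition, classification labels) than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- real DLP tooling is far more thorough.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text):
    """Return the labels of sensitive patterns found in a prompt,
    so the upload can be blocked or the text redacted first."""
    return sorted(label for label, pattern in SENSITIVE_PATTERNS.items()
                  if pattern.search(text))
```

A gateway that applies such a check can block or redact a prompt before it reaches an external AI service, which helps CISOs and CIOs permit productive use of the tools while restricting what data reaches them.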

To address the potential dangers of generative AI, OpenAI has formed a Preparedness team to assess risks such as chemical, biological, and radiological threats [6], autonomous replication [6], and the generation of malicious code. The team will also evaluate a model's ability to persuade and fool humans [6], as seen in phishing attacks. The Biden administration's concerns over the rapid development of generative AI prompted this evaluation [6]. Additionally, Google is expanding its Vulnerability Rewards Program (VRP) to include generative AI-specific attack scenarios and is revising its bug categorization and reporting policies [3]. The company is also launching the Secure AI Framework (SAIF) to build trustworthy applications and is working with the Open Source Security Foundation to protect the integrity of AI supply chains [3]. Generative AI raises further concerns, such as unfair bias, model manipulation, and misinterpretation of data [3].


Generative AI presents both benefits and challenges in the cybersecurity landscape [4]. As its use continues to grow, organizations must be aware of the implications for future regulations and how to integrate this emerging technology [4]. Mitigating the risks associated with generative AI requires secure tools, well-defined policies [1] [3] [5], and employee education. The IT security community plays a crucial role in understanding and addressing the potential risks. By taking proactive measures and collaborating with industry experts, organizations can harness the transformative value of generative AI while ensuring data privacy and compliance.