Generative artificial intelligence (AI) can affect cybersecurity both positively and negatively. While advocates believe it can help defend against cyber threats [2], skeptics fear it may increase security incidents. This article explores the use of generative AI in cybersecurity and argues for using it judiciously alongside human expertise.


Generative AI is already being used to automate routine tasks, freeing security teams to focus on more strategic projects [2]. It brings value to cybersecurity by helping developers create secure applications [3], offering actionable remediation paths for security vulnerabilities [3], and prioritizing an organization's most important assets [3]. By training on relevant data sets, generative AI can shift security from a reactive to a proactive posture, enabling predictive and more accurate decision-making [3].

However, overreliance on AI raises concerns: it can lead to complacency and security gaps. Generative models like ChatGPT and DALL-E 2 can produce misleading guidance because large language models (LLMs) rely on statistical pattern-matching rather than contextual understanding. The inconsistency and inaccuracy of LLM outputs pose challenges for their practical application in cybersecurity [2]. It is therefore crucial to use generative AI judiciously and in conjunction with Bayesian machine learning models for safer automation. Human judgment, nuance [2], and contextual understanding remain essential in cybersecurity, and generative AI should augment security talent rather than replace it [2].
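As a rough illustration of why Bayesian models appeal for safer automation, the sketch below uses a Beta-Bernoulli posterior to decide whether an alert type has accumulated enough evidence to be escalated automatically. The function names, counts, and thresholds here are hypothetical assumptions for illustration, not details drawn from the cited sources.

```python
# Hypothetical sketch: a Beta-Bernoulli posterior over an alert type's
# true-positive rate. Automation fires only when the estimate is both
# high and certain; otherwise the alert falls back to a human analyst.
from math import sqrt

def posterior_true_positive(true_pos: int, false_pos: int,
                            prior_a: float = 1.0, prior_b: float = 1.0):
    """Return posterior mean and standard deviation of the TP rate."""
    a = prior_a + true_pos          # posterior alpha (successes)
    b = prior_b + false_pos        # posterior beta (failures)
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

def should_auto_escalate(true_pos: int, false_pos: int,
                         threshold: float = 0.9, max_std: float = 0.05) -> bool:
    """Automate only when the model is both confident and certain."""
    mean, std = posterior_true_positive(true_pos, false_pos)
    return mean >= threshold and std <= max_std

# With only 10 observations the uncertainty stays high, so a human reviews
# the alert; with ample evidence, automation is allowed.
print(should_auto_escalate(9, 1))     # uncertain: route to a human
print(should_auto_escalate(950, 50))  # confident and certain: automate
```

The design choice worth noting is the second gate: unlike a raw LLM score, the posterior's standard deviation quantifies how much evidence backs the estimate, which is what makes "defer to a human when unsure" implementable.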

Companies are increasingly adopting generative AI across their businesses [4]. However, the use of unsanctioned generative AI tools, known as bring your own AI (BYOAI), poses risks [4]. Employees who send company data to external AI services can cause security vulnerabilities, data loss [3] [4], and copyright violations [4]. Unlike devices governed by bring your own device (BYOD) policies, BYOAI is difficult to control because employees can reach AI tools through any website [4]. Employers must weigh these risks and find ways to mitigate them [4].

Incorporating generative AI into a security program requires investing in solving repetitive and complex problems [3], educating decision-makers [3], allocating budget and resources [3], and developing a playbook for corrective actions. Data privacy and leakage are significant risks that can be mitigated by hosting models internally and anonymizing data [3]. While generative AI can already automate threat detection [3], security patching [1] [2] [3], and incident response [3], it will likely take longer to mature for threat modeling and automated patch deployment [3]. Organizations should establish data-protection policies and align them with their overall cybersecurity strategies.
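To make the anonymization mitigation concrete, here is a minimal sketch of masking identifiers in log lines before they leave the organization for an externally hosted model. The regex patterns, salt, and log format are illustrative assumptions, not details from the cited sources.

```python
# Hypothetical sketch: pseudonymize IPs and e-mail addresses in log data
# before sending it to an external generative AI service.
import hashlib
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """Replace an identifier with a stable, non-reversible token."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"<id:{digest}>"

def anonymize_log_line(line: str) -> str:
    """Mask IPs and e-mail addresses; identical inputs map to identical
    tokens, so cross-event correlation is preserved without exposing PII."""
    line = IP_RE.sub(lambda m: pseudonymize(m.group()), line)
    line = EMAIL_RE.sub(lambda m: pseudonymize(m.group()), line)
    return line

raw = "login failure for alice@example.com from 203.0.113.42"
print(anonymize_log_line(raw))
```

Because the tokens are deterministic for a given salt, an analyst can still see that two anonymized events involve the same user or host, which keeps the data useful for threat detection while limiting leakage.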


Generative AI has the potential to revolutionize cybersecurity by automating tasks [3], improving decision-making, and enhancing overall security capabilities, but its implementation warrants caution. Mitigating the risks of unsanctioned tools such as BYOAI is crucial for maintaining data security and preventing copyright violations, and organizations should address data privacy and leakage by hosting models internally and anonymizing data. While generative AI can automate certain aspects of cybersecurity, human expertise and judgment remain indispensable; striking a balance between generative AI and human talent is key to maximizing its benefits within a robust cybersecurity strategy. Ongoing research and development are also needed to address its current limitations in threat modeling and automated patch deployment. With these measures in place, organizations can harness the full potential of generative AI while guarding against security breaches and emerging threats.