Introduction

The integration of generative AI (GenAI) into cybersecurity presents both opportunities and challenges [7]. While security professionals generally view GenAI positively, acknowledging its potential to enhance security measures, there are also significant risks associated with its use, particularly in the context of cyber threats such as phishing and malware. This dual nature of GenAI necessitates a nuanced approach to its implementation in cybersecurity strategies.

Description

Security professionals generally view generative AI (GenAI) positively in the context of cybersecurity, with 90% believing it benefits security teams as much or more than it does threat actors [3] [4] [5] [7] [8]. Views on its overall impact are more measured: 46% consider GenAI a net positive, while only 6% see it as a net negative [5]. A recent report from Ivanti, titled “Generative AI and Cybersecurity: Risk and Reward,” examines how organizations are addressing the complexities of GenAI in cybersecurity [2] [3] [7] [8]. It highlights that GenAI can enhance security measures [2] [3] [8], particularly threat detection and response capabilities [3], but also poses significant risks [3] [8], especially in the realm of phishing attacks. Notably, 45% of survey participants reported an increase in the severity of these attacks due to GenAI’s capabilities. The rise of GenAI has also created new opportunities for cybercriminals, with nearly all executives surveyed believing that adopting GenAI will heighten security risks within the next three years.

Emerging threats associated with GenAI include GenAI-powered malware [6], such as DeepLocker [6], which utilizes advanced obfuscation techniques to evade detection [6]. A study revealed that a GPT-4-based agent successfully exploited 87% of “one-day” vulnerabilities, that is, publicly disclosed vulnerabilities for which patches are not yet available [6]. The ability of bad actors to impersonate voices [6], faces [6], and personalities enhances the credibility of their attacks [6], posing significant risks to organizations [6].

Despite the critical role of training in cybersecurity [1] [2] [3] [4] [7] [8], many organizations have not updated their strategies to address AI-driven threats [2] [3] [7] [8]. Only 32% of those utilizing anti-phishing training find it “very effective,” even though 57% employ such training to combat sophisticated social-engineering attacks [4]. Furthermore, 72% of respondents indicate that their IT and security data remain isolated in silos [4] [7], which hampers effective threat management and diminishes the potential of GenAI to enhance security measures.

To defend against these GenAI-driven threats [6], organizations must implement a robust security framework that includes updates to governance [6], risk [1] [2] [3] [4] [6] [8], and compliance strategies [6]. As AI regulations tighten [6], embedding security principles from the outset of GenAI projects is essential for fostering innovation while ensuring a strong security foundation [6].

The report also underscores the ongoing cybersecurity talent shortage, estimated at 4.8 million professionals globally [1] [3]. One in three security professionals cites a lack of skills as a major challenge [1] [3] [4] [7] [8], suggesting that GenAI could help bridge this gap by enhancing team productivity [3] [7]. In fact, 85% of respondents expect these tools to significantly improve their efficiency, provided that organizations invest in upskilling their cybersecurity workforce [2] [3] [7] [8].

Data security is critical [6], as data serves as the cornerstone for GenAI models and is a prime target for cyberattacks [6]. Manipulation of data can lead to misguided business decisions [6], creating new legal [6], security [1] [2] [3] [4] [5] [6] [7] [8], and privacy challenges [6].

The research involved a survey of over 14,500 executives [3] [7], IT and security professionals [1] [2] [3] [4] [5] [7] [8], and office workers to assess how organizations manage AI in cybersecurity and the processes [7], technology [6] [7], and talent required to strengthen defenses [7].

Organizations often initiate GenAI projects with a single data source [6], but to fully leverage GenAI’s potential [6], access to data across multiple distributed systems and formats is necessary [6]. For instance [6], a customer service chatbot requires information from various systems [6], including enterprise resource planning and customer relationship management [6]. Established security measures such as authentication [6], encryption [6], and masking remain vital [6], alongside robust access controls to protect sensitive data [6]. Implementing role-based or attribute-based access control and utilizing data tags for sensitivity are crucial steps [6].
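To make the access-control recommendation concrete, the sketch below shows how attribute-based access control (ABAC) can combine a user's clearance with a record's sensitivity tag. All names here (roles, departments, tag values) are illustrative assumptions, not part of any specific product or the Ivanti report.

```python
# Minimal ABAC sketch: a record is released only if the user's clearance
# meets the record's sensitivity tag, and highly sensitive records are
# additionally scoped to the owning department (an attribute check).
SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_access(user: dict, record: dict) -> bool:
    """Return True if the user may read the tagged record."""
    user_rank = SENSITIVITY_RANK[user["clearance"]]
    record_rank = SENSITIVITY_RANK[record["sensitivity"]]
    if user_rank < record_rank:
        return False  # clearance too low for this sensitivity tag
    if record_rank >= SENSITIVITY_RANK["confidential"]:
        # Attribute check: sensitive data is also scoped by department.
        return user["department"] == record["department"]
    return True

analyst = {"clearance": "confidential", "department": "security"}
crm_row = {"sensitivity": "confidential", "department": "security"}
erp_row = {"sensitivity": "restricted", "department": "finance"}

print(can_access(analyst, crm_row))  # True: clearance and department match
print(can_access(analyst, erp_row))  # False: clearance below "restricted"
```

In practice such checks are enforced by the data platform or an access-control service rather than application code, but the decision logic, comparing user attributes against data tags, is the same.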

As GenAI continues to evolve, organizations are urged to adapt their cybersecurity strategies to leverage its benefits while mitigating associated risks. A logical data management approach can address these challenges by consolidating disparate data through metadata [6], providing a unified view while maintaining security and governance [6]. This approach supports user and role-based authentication and authorization [6], with options for row-based and column-based security [6], including data masking [6]. It also enables tracking of data lineage and queries [6], aiding in regulatory compliance [6]. Secure GenAI initiatives must begin with trusted data [6], and incorporating a logical data management strategy early in GenAI projects can help mitigate security threats and ensure robust data governance [6].
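As a rough illustration of the row-based and column-based security described above, the sketch below applies per-role row filters and column masking to query results before they reach a consuming application, such as a GenAI chatbot. The policies, field names, and masking rule are assumptions for the example, not a description of any particular logical data management product.

```python
# Illustrative sketch of role-based row filtering and column masking,
# as a logical data layer might apply them to a unified view.
MASKED_COLUMNS = {"analyst": {"email", "ssn"}}                   # columns masked per role
ROW_FILTERS = {"analyst": lambda row: row["region"] == "EMEA"}   # row-level policy per role

def mask(value: str) -> str:
    """Mask all but the last four characters of a value."""
    return "*" * max(len(value) - 4, 0) + value[-4:]

def apply_policies(role: str, rows: list) -> list:
    keep = ROW_FILTERS.get(role, lambda r: True)
    hidden = MASKED_COLUMNS.get(role, set())
    result = []
    for row in rows:
        if not keep(row):
            continue  # row-based security: drop rows outside the role's scope
        # column-based security: mask sensitive fields, pass the rest through
        result.append({k: mask(v) if k in hidden else v for k, v in row.items()})
    return result

rows = [
    {"name": "Alice", "email": "alice@example.com", "ssn": "123-45-6789", "region": "EMEA"},
    {"name": "Bob", "email": "bob@example.com", "ssn": "987-65-4321", "region": "APAC"},
]
for r in apply_policies("analyst", rows):
    print(r)  # only Alice's row survives, with email and ssn masked
```

Centralizing these policies in the data layer, rather than in each application, is what lets a unified view enforce consistent governance across the distributed sources a GenAI project draws on.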

Conclusion

The integration of GenAI into cybersecurity strategies offers both significant benefits and notable risks. While it can enhance threat detection and response capabilities [3], it also introduces new vulnerabilities, particularly in phishing and malware attacks. To effectively harness GenAI’s potential, organizations must update their cybersecurity frameworks [4], invest in workforce upskilling, and adopt comprehensive data management strategies. As AI regulations evolve [6], embedding security principles from the outset of GenAI projects will be crucial in balancing innovation with robust security measures.

References

[1] https://vmblog.com/archive/2024/12/03/ivanti-research-finds-phishing-tops-list-of-growing-cyber-threats-fueled-by-genai.aspx
[2] https://markets.financialcontent.com/stocks/article/bizwire-2024-12-3-ivanti-research-finds-phishing-tops-list-of-growing-cyber-threats-fueled-by-genai
[3] https://finance.yahoo.com/news/ivanti-research-finds-phishing-tops-050100303.html
[4] https://www.businesswire.com/news/home/20241202636068/en/Ivanti-Research-Finds-Phishing-Tops-List-of-Growing-Cyber-Threats-Fueled-by-GenAI
[5] https://www.infosecurity-magazine.com/news/security-pros-genai-attack/
[6] https://www.rtinsights.com/with-great-innovation-comes-great-risk-the-impact-of-genai-on-cybersecurity/
[7] https://www.innovationopenlab.com/news-biz/37683/ivanti-research-finds-phishing-tops-list-of-growing-cyber-threats-fueled-by-genai.html
[8] https://hrtechcube.com/phishing-tops-list-of-growing-cyber-threats-fueled-by-genai-ivanti/