A recent global survey conducted by cybersecurity vendor ExtraHop reveals that many businesses are implementing generative AI systems without sufficient oversight. This raises concerns about data leaks and inaccurate responses, highlighting the need for robust data handling and processing frameworks [3].

Description

The survey, which involved over 1,200 security and IT leaders [3], found that 73% reported that employees use generative AI tools at least occasionally [1]. However, fewer than half of organizations have established security policies governing what data may be shared, or invested in technology to monitor generative AI use. This gap in basic security practice puts organizations at risk and makes it hard to trace or retrieve data once it has been shared with these tools. The survey emphasizes the need for companies to prioritize training, monitoring capabilities [1] [2], and data governance policies to ensure safe and productive use of generative AI tools. Respondents also expressed a desire for external guidance and clear AI regulations [3].
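
To make the monitoring point concrete, here is a minimal sketch of one common approach: flagging web-proxy log entries that reference known generative-AI services. The domain watchlist, log format, and field names are illustrative assumptions for exposition, not something reported in the survey.

```python
# Minimal sketch: flag web-proxy log lines that reference known
# generative-AI domains. Domain list and log format are assumptions.
import csv
import io

# Hypothetical watchlist of generative-AI service domains.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

# Hypothetical proxy log: timestamp,user,destination_host
SAMPLE_LOG = """\
2023-07-10T09:14:02,alice,chat.openai.com
2023-07-10T09:15:40,bob,intranet.example.com
2023-07-10T09:17:11,carol,claude.ai
"""

def flag_genai_access(log_text: str) -> list[dict]:
    """Return log rows whose destination matches the generative-AI watchlist."""
    reader = csv.DictReader(io.StringIO(log_text),
                            fieldnames=["timestamp", "user", "host"])
    return [row for row in reader if row["host"] in GENAI_DOMAINS]

if __name__ == "__main__":
    for hit in flag_genai_access(SAMPLE_LOG):
        print(f"{hit['timestamp']}: {hit['user']} accessed {hit['host']}")
```

A real deployment would draw on proxy or DNS telemetry and feed alerts into existing data governance workflows, but even a simple watchlist like this gives security teams the visibility the survey found most organizations lack.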

Generative AI tools [1] [2] [3], ChatGPT in particular [2], have had a significant impact on cybersecurity, especially on email attacks [2]. Attackers are leveraging generative AI to enhance their phishing emails, producing more effective and sophisticated attacks [2]. Darktrace’s research shows an increase in novel social engineering attacks since the widespread adoption of ChatGPT [2]. These attacks mimic the sender’s identity, making them highly realistic and difficult to detect [2]. Tactics are also shifting: attackers now impersonate IT teams rather than senior executives [2]. Defensive AI, however, can learn an organization’s normal communication patterns and judge the legitimacy of each email against them, providing a stronger defense against AI-powered threats [2].
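
The "normal communication patterns" defense can be illustrated with a small anomaly-scoring sketch: build a per-sender baseline from historical email and flag new messages that deviate from it. The `Email` fields, the chosen features, and the scoring below are simplified assumptions for exposition; they are not Darktrace's actual model.

```python
# Sketch of pattern-of-life email defense: baseline each sender's past
# behavior, then score new mail by how far it deviates. Features and
# scoring are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    recipient: str
    hour: int               # hour of day the email was sent
    reply_to_differs: bool  # Reply-To header differs from the sender address

def build_baseline(history: list[Email]) -> dict[str, Counter]:
    """Count, per sender, which recipients and sending hours are normal."""
    baseline: dict[str, Counter] = {}
    for mail in history:
        stats = baseline.setdefault(mail.sender, Counter())
        stats[("recipient", mail.recipient)] += 1
        stats[("hour", mail.hour)] += 1
    return baseline

def anomaly_score(mail: Email, baseline: dict[str, Counter]) -> int:
    """Score a new email: one point per feature never seen for this sender."""
    stats = baseline.get(mail.sender, Counter())
    score = 0
    if stats[("recipient", mail.recipient)] == 0:
        score += 1
    if stats[("hour", mail.hour)] == 0:
        score += 1
    if mail.reply_to_differs:
        score += 1
    return score

history = [Email("it@corp.example", "alice@corp.example", 10, False)] * 5
suspect = Email("it@corp.example", "alice@corp.example", 3, True)
print(anomaly_score(suspect, build_baseline(history)))  # 2: odd hour + Reply-To mismatch
```

A production system would use far richer features (linguistic style, link reputation, sending frequency) and learned thresholds, but the principle is the same: legitimacy is judged against observed behavior rather than message content alone, which is what makes this approach resilient to fluent AI-written phishing.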

Conclusion

The growing adoption of generative AI tools without sufficient oversight exposes organizations to data leaks and inaccurate responses. To mitigate these risks, organizations should prioritize robust data handling and processing frameworks [3], establish security policies [3], invest in monitoring technology, and train employees. External guidance and clear AI regulations are also needed to ensure safe and productive use of generative AI tools. In cybersecurity specifically, attackers have used generative AI tools such as ChatGPT to enhance email phishing, while defensive AI can analyze communication patterns to strengthen defenses against AI-powered threats. It is crucial for security teams to embrace AI as a solution rather than viewing it solely as a threat [2].

References

[1] https://www.secureworld.io/industry-news/ai-adoption-outpacing-security-measures
[2] https://www.techradar.com/pro/generative-ais-impact-on-phishing-attacks
[3] https://www.infosecurity-magazine.com/news/ai-surges-security-awareness-lags/