Introduction
The integration of cybersecurity teams in the development of artificial intelligence (AI) policies within enterprises is often overlooked, despite its critical importance. Recent studies highlight a significant gap in the involvement of these teams, which poses potential risks as organizations increasingly rely on AI technologies.
Description
Cybersecurity teams are frequently excluded from discussions about the development of artificial intelligence (AI) policies within enterprises. Recent research indicates that 45% of organizations do not involve these teams in the creation, onboarding, or implementation of AI solutions [1] [2] [3] [4] [5] [6] [7]. A survey of over 1,800 cybersecurity professionals revealed that only 35% reported active engagement in policy development. This lack of involvement is concerning, especially as organizations often prioritize innovation and customer experience over cybersecurity, focusing primarily on ethical and compliance issues [7]. The integration of cybersecurity into AI governance frameworks is deemed crucial, particularly given the increasing reliance on AI in security operations [7].
The 2024 State of Cybersecurity survey highlights that the primary applications of AI in security operations are automating threat detection and response (28%), endpoint security (27%), automating routine security tasks (24%), and fraud detection (13%) [2] [4] [5] [6]. While these technologies present significant opportunities, they also pose challenges, as they can be exploited in cyberattacks. It is therefore essential for cybersecurity teams to be involved in all stages of AI solution integration, including existing products that may later incorporate AI capabilities [1]. Jon Brandt, ISACA Director of Professional Practices and Innovation, emphasizes the importance of involving security teams in the development and implementation of AI solutions to address staffing challenges and the complex threat landscape [2] [5].
Chris Dimitriadis, Chief Global Strategy Officer at ISACA, has underscored the importance of establishing robust AI governance frameworks, noting that while global efforts to regulate AI are increasing, cybersecurity considerations are often overlooked [3] [7]. Organizations adopting generative AI policies should address critical questions regarding policy scope, definitions of acceptable behavior, and compliance with legal requirements [1] [5] [6]. To assist professionals in navigating these developments, ISACA has released resources including a white paper on the EU AI Act, which outlines compliance requirements for AI systems in the European Union, effective from August 2, 2026 [4]. Key recommendations from ISACA include instituting audits, adapting existing policies, and designating an AI lead [4].
Furthermore, ISACA has produced resources addressing the implications of AI for authentication, particularly concerning deepfakes, which illustrate both the benefits and risks of AI-driven systems [4]. Erik Prusch, CEO of ISACA, emphasized that responsibility for AI governance now extends to all team members, as vulnerabilities are linked to both individual actions and organizational systems [3]. To support the evolving landscape of AI and cybersecurity, ISACA has expanded its educational offerings, including on-demand courses on machine learning and a forthcoming Certified Cybersecurity Operations Analyst certification, set to launch in Q1 2025 and focused on essential skills for evaluating threats and vulnerabilities [4].
Conclusion
The exclusion of cybersecurity teams from AI policy development poses significant risks, as AI technologies become integral to security operations. To mitigate these risks, organizations must prioritize the integration of cybersecurity considerations into AI governance frameworks. This involves addressing regulatory compliance, ethical considerations, and potential vulnerabilities. As AI continues to evolve, the role of cybersecurity will be increasingly vital, necessitating ongoing education and adaptation to safeguard against emerging threats.
References
[1] https://www.businesswire.com/news/home/20241024323463/en/New-Study-Nearly-Half-of-Companies-Exclude-Cybersecurity-Teams-When-Developing-Onboarding-and-Implementing-AI-Solutions
[2] https://www.techradar.com/pro/cybersecurity-teams-are-being-left-out-of-creating-the-next-generation-of-ai-tools
[3] https://www.infosecurity-magazine.com/news/cybersecurity-teams-ignored-ai/
[4] https://finance.yahoo.com/news/study-nearly-half-companies-exclude-110000725.html
[5] https://markets.financialcontent.com/stocks/article/bizwire-2024-10-24-new-study-nearly-half-of-companies-exclude-cybersecurity-teams-when-developing-onboarding-and-implementing-ai-solutions
[6] https://www.innovationopenlab.com/news-biz/34517/new-study-nearly-half-of-companies-exclude-cybersecurity-teams-when-developing-onboarding-and-implementing-ai-solutions.html
[7] https://www.freevacy.com/news/infosecurity-magazine/cybersecurity-teams-missing-from-ai-governance-programmes/5854