Organizations currently face the challenge of balancing innovation with security in their generative AI projects [3].

A joint study from IBM and Amazon Web Services found that while 82% of C-suite executives recognize the importance of trustworthy and secure AI [2], only 24% are building security into their generative AI projects [2]. This gap raises concerns about unpredictable risks and new vulnerabilities, such as model extraction and data poisoning [1], that threaten AI initiatives [2] [3]. The study stresses the need to secure AI from the start, given its integration into business processes and the value of the data and insights it generates [2].

A lack of understanding, and uncertainty about where to invest in generative AI security, is hindering organizations from taking proactive measures [3]. Recommendations include establishing a new security governance model, threat modeling [1], securing training data workflows [1], and monitoring for unexpected behaviors [1]. IBM X-Force researchers anticipate increased targeting of AI systems as the technology matures [2]; current threats include phishing email scripts and deepfake audio [2]. By setting governance guardrails [3], organizations can develop a strategy to secure the entire AI pipeline [3], including data [1] [3], models [1] [3], and infrastructure [3]. Recognizing the threat landscape facing AI pipelines is crucial [3]: emergent threats range from shadow AI use inside the organization to offensive techniques that treat AI assets and services themselves as attack vectors. Suggested mitigations include establishing policies and controls for the use of third-party AI applications [1].

Neglecting security in generative AI projects poses significant risks to organizations. By implementing the recommended security measures and staying vigilant against emerging threats, organizations can safeguard their AI initiatives and preserve the integrity of their data and models.