Generative AI applications have become increasingly popular among businesses, with many organizations now running multiple genAI apps. Their use, however, carries inherent risks, particularly around data security and privacy.
Description
Generative AI usage has surged: businesses now employ an average of nearly 10 genAI apps [2], a significant increase over previous years. Concerningly, more than a third of the sensitive information shared with genAI applications is regulated data [2], exposing organizations to potential data breaches. The sharing of proprietary source code within genAI apps remains a major problem, accounting for 46% of data policy violations [1] [2] [3] [4].

To counter these risks, enterprises are increasingly implementing data loss prevention (DLP) controls, whose adoption has risen by 75% [3], and three-quarters of businesses now block genAI apps to prevent unauthorized data sharing. Real-time user coaching, which alerts users to potential security risks as they interact with genAI tools, is also being deployed to mitigate data risks, and it demonstrably changes behavior: 57% of users modified their actions after receiving a coaching alert [1] [2] [4]. Popular genAI apps include ChatGPT and Microsoft Copilot [1], while 19% of organizations have banned GitHub Copilot outright [1].

Netskope Chief Information Security Officer James Robinson emphasizes that robust risk management practices are essential to safeguard data and reputation and to ensure business continuity [2] [4] [5]. Netskope recommends that enterprises assess their AI usage [1], implement core security controls [1], plan for advanced controls such as threat modeling and continuous monitoring [5], and regularly evaluate their security measures [1].
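To make the DLP and coaching ideas concrete, the sketch below shows how a simple pre-send prompt filter with real-time user coaching might work in principle. This is a minimal illustration, not Netskope's or any vendor's actual implementation: the regex patterns, function names, and coaching message are assumptions chosen for the example, and a production DLP engine would rely on far more sophisticated detection (checksum validation, fingerprinting, ML classifiers).

```python
import re

# Illustrative patterns only; real DLP engines do not rely on bare regexes.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "source code marker": re.compile(r"\bdef |\bclass |#include|\bimport "),
}


def find_sensitive(prompt: str) -> list[str]:
    """Return the names of all sensitive-data patterns matched in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]


def coach_and_filter(prompt: str) -> bool:
    """Block a risky prompt and tell the user why (real-time coaching).

    Returns True if the prompt may be forwarded to the genAI app.
    """
    findings = find_sensitive(prompt)
    if not findings:
        return True
    print("Prompt blocked. It appears to contain:")
    for name in findings:
        print(f"  - {name}")
    print("Remove the sensitive content and try again.")
    return False


if __name__ == "__main__":
    assert coach_and_filter("Draft a polite reply to this customer email.")
    assert not coach_and_filter("My SSN is 123-45-6789; fill in the form.")
```

The design point is that the user is told what triggered the block rather than silently denied, which is the behavior the coaching statistics above describe: users who see why a prompt was risky tend to rewrite it rather than retry it unchanged.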
Conclusion
The widespread adoption of generative AI applications presents both opportunities and challenges for businesses. While these tools offer innovative solutions, they also pose significant risks to data security and privacy. By implementing robust risk management practices, such as data loss prevention controls and real-time user coaching, organizations can mitigate these risks and ensure the safe and responsible use of genAI apps. Looking ahead, it is crucial for enterprises to continuously evaluate and enhance their security measures to adapt to evolving threats in the AI landscape.
References
[1] https://workplaceinsight.net/generative-ai-is-scraping-vast-amounts-of-regulated-sensitive-data-from-organisations/
[2] https://vmblog.com/archive/2024/07/17/more-than-a-third-of-sensitive-business-information-entered-into-generative-ai-apps-is-regulated-personal-data-netskope-threat-labs.aspx
[3] https://www.infosecurity-magazine.com/news/sensitive-data-sharing-genai/
[4] https://www.intelligentciso.com/2024/07/17/netskope-threat-labs-reveals-more-than-a-third-of-sensitive-business-information-entered-into-generative-ai-apps-is-regulated-personal-data/
[5] https://www.lelezard.com/en/news-21454491.html