A recent report by Menlo Security highlights the growing risk of data loss and exposure posed by generative AI. The report focuses on the types of data most commonly exposed and on the rise in attempts to upload files to generative AI websites.


According to the report, personally identifiable information (PII) is the most frequently exposed type of data [2], followed by confidential documents [2] [8]. In the past thirty days [5] [8], more than half of the data loss prevention (DLP) events detected by Menlo Security involved attempts to input PII [5]. The report also notes an 80% increase in attempted file uploads to generative AI websites [2] [4] [7] [8], attributed to the addition of file-upload features on AI platforms [2] [8]. While copy-and-paste attempts have decreased [2] [4], file uploads now pose a significant risk [2] [4].
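To make the kind of DLP event described above concrete, here is a minimal, hypothetical sketch of a pre-upload check that flags common PII categories with regular expressions. The pattern names and patterns are illustrative assumptions only; production DLP engines (including Menlo Security's) use far more sophisticated detection than simple regexes.

```python
import re

# Hypothetical PII categories; real DLP products use richer detectors.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of PII categories matched in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def allow_upload(text: str) -> bool:
    """Permit the input only when no PII category matches."""
    return not find_pii(text)
```

A check like this would run before text is pasted or a file's contents are sent to a generative AI site, logging a DLP event whenever `find_pii` returns a non-empty list.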

Organizations are implementing security policies for generative AI sites [1] [2] [4] [7] [8], but mostly on an application-by-application basis [1] [2] [4] [7] [8], an approach that can leave gaps in safeguards [2] [7]. The report emphasizes the need for comprehensive [2] [3] [6] [7], group-level security policies to address the evolving cybersecurity risks of generative AI usage [2]. It also highlights the importance of scalable, efficient monitoring of employee behavior, along with ongoing cybersecurity training and education to keep pace with AI developments [3].


The report underscores the growing impact of generative AI on data security, particularly the exposure of PII and confidential documents. To mitigate these risks [7], organizations should adopt comprehensive security policies at the group level rather than relying on application-specific measures. Monitoring employee behavior and providing ongoing cybersecurity training are also crucial for keeping pace with AI advancements and protecting sensitive information. Looking ahead, generative AI will continue to present new challenges, making it imperative for organizations to remain vigilant and proactive in their cybersecurity efforts.


[1] https://www.securitymagazine.com/articles/100400-55-of-generative-ai-inputs-comprised-personally-identifiable-data
[2] https://markets.financialcontent.com/stocks/article/bizwire-2024-2-14-menlo-security-reports-that-55-of-generative-ai-inputs-contained-sensitive-and-personally-identifiable-information
[3] https://www.infosecurity-magazine.com/news/pii-input-sparks-alarm-dlp-events/
[4] https://aithority.com/technology/menlo-security-55-percent-of-generative-ai-inputs-include-sensitive-information/
[5] https://finance.yahoo.com/news/menlo-security-reports-55-generative-130000748.html
[6] https://securityboulevard.com/2024/02/55-of-generative-ai-inputs-include-sensitive-data-menlo-security/
[7] https://siliconangle.com/2024/02/14/new-report-finds-sensitive-information-risk-55-generative-ai-inputs/
[8] https://betanews.com/2024/02/14/over-half-of-gen-ai-inputs-contain-pii-and-sensitive-data/