A critical prompt injection vulnerability [4] [6], identified as CVE-2024-5184 [3], has been disclosed in EmailGPT [1] [4] [7], an AI email assistant service that utilizes OpenAI’s GPT models to help users compose emails within Gmail.

Description

This vulnerability, with a CVSS Base Score of 6.5 [2] [4], was discovered by researchers at the Synopsys Cybersecurity Research Center (CyRC) and security researcher Mohammed Alshehri. Malicious actors can exploit this flaw to inject harmful prompts, potentially leading to data exposure [3] [6] [7], financial loss [1] [6] [7], and denial-of-service attacks [1] [6]. The vulnerability allows attackers to take control of the AI service by sending malicious messages [7], potentially leading to the disclosure of confidential information or unauthorized command execution [1] [7]. The main branch of the EmailGPT software is affected [1] [7], posing significant risks such as intellectual property theft [1] [7], denial of service attacks [6] [7], and financial losses from unauthorized API requests [7]. Despite CyRC’s efforts to contact the developers within the 90-day disclosure period, the vulnerability remains unaddressed, prompting them to recommend immediate removal of EmailGPT applications to mitigate potential risks [7]. Prompt injection attacks are a growing concern as AI adoption increases [4], underscoring the importance of prompt handling protections when integrating Large Language Models (LLMs) [4]. Eric Schwake [1] [7], Director of Cybersecurity Strategy at Salt Security [1] [7], stresses the need for auditing all installed applications, especially those utilizing AI services and language models [1] [7], to assess their security measures [7]. Patrick Harr [1] [6] [7], CEO at SlashNext [1] [7], emphasizes the necessity of robust governance and security practices in AI model development [1], urging customers and businesses to demand evidence of suppliers’ security measures and data access protocols before integrating AI models into their operations [1]. Staying informed about updates and patches is crucial for secure service use in the face of evolving AI threats [3]. CyRC recommends immediately removing the application to prevent exploitation [4] [5].

Conclusion

The prompt injection vulnerability in EmailGPT highlights the potential risks associated with AI email assistant services. Mitigating these risks requires immediate action, including removing vulnerable applications and implementing robust security measures. As AI adoption continues to grow, ensuring the security of AI models and services is essential to protect against evolving threats.

References

[1] https://www.infosecurity-magazine.com/news/emailgpt-exposed-prompt-injection/
[2] https://kbi.media/press-release/cyrc-vulnerability-advisory-cve-2024-5184s-prompt-injection-in-emailgpt-service/
[3] https://cybermaterial.com/emailgpt-vulnerability-exposes-data/
[4] https://securityboulevard.com/2024/06/prompt-injection-vulnerability-in-emailgpt-discovered/
[5] https://devopsforum.uk/topic/69107-prompt-injection-vulnerability-in-emailgpt-discovered/
[6] https://www.scmagazine.com/brief/significant-compromise-likely-with-new-emailgpt-vulnerability
[7] https://islainformatica.com/emailgpt-expuesto-a-ataques-de-inyeccion-inmediata/