Cybersecurity researchers discovered a critical vulnerability in Microsoft 365 Copilot that used ASCII smuggling, a technique that hides data inside clickable hyperlinks with special Unicode characters [5], to compromise user data.
Description
Security researcher Johann Rehberger detailed the mechanics of ASCII smuggling [1] [3], which uses invisible Unicode characters to embed data in otherwise ordinary text and extract it without detection [3]. Combined with prompt injection delivered through malicious content, the technique could trick Copilot into searching additional emails and documents without user consent [6] and exfiltrating the results to an attacker-controlled server: researchers demonstrated how sales figures and multi-factor authentication codes could be smuggled out and then decoded [4]. Microsoft was informed of the vulnerability in January 2024 and released a patch by July 2024. The incident underscores the risks associated with AI tools like Microsoft 365 Copilot and the importance of robust defenses against prompt injection and related attacks.
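To make the mechanism concrete, the following is a minimal Python sketch of the encoding commonly described as ASCII smuggling, assuming the mapping onto the invisible Unicode Tags block (U+E0000–U+E007F); the function names and sample payload are illustrative, not taken from the actual exploit:

```python
# Minimal sketch of ASCII smuggling (assumed mapping onto the Unicode
# Tags block, U+E0000-U+E007F). These code points mirror ASCII but render
# invisibly in most UIs, so smuggled text can ride inside what looks like
# a normal hyperlink label.

TAG_BASE = 0xE0000  # offset of the Unicode Tags block (illustrative name)

def smuggle(payload: str) -> str:
    """Map each ASCII character to its invisible Tags-block counterpart."""
    return "".join(chr(TAG_BASE + ord(c)) for c in payload if ord(c) < 0x80)

def reveal(text: str) -> str:
    """Recover smuggled ASCII from any Tags-block characters in a string."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in text
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

# Hypothetical link text that looks benign but carries a hidden MFA code.
hidden = smuggle("MFA=123456")
link_text = "Click here for details" + hidden

print(repr(link_text))   # the Tags characters are present but invisible
print(reveal(link_text)) # -> MFA=123456
```

Because the hidden characters survive copy-and-paste and URL construction, a crafted hyperlink can carry such a payload to an attacker-controlled server the moment the user clicks it.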
Proof-of-concept attacks have demonstrated further vulnerabilities in Microsoft's Copilot system [5], allowing cybercriminals to manipulate responses [5], exfiltrate data [1] [3] [4] [5] [7], and bypass security protections [2] [5]. Techniques such as retrieval-augmented generation (RAG) poisoning and indirect prompt injection could lead to remote code execution attacks [5], and an attacker with code execution capabilities could use Microsoft 365 Copilot to serve users phishing pages [5]. One notable attack tool, LOLCopilot [1] [5], turns the AI into a spear-phishing machine by mimicking compromised users' email styles [5]. Publicly exposed Copilot bots without authentication protections could also be exploited to extract sensitive information [1] [5]. Organizations are advised to assess their risk tolerance and exposure, enable available security checks to prevent data leaks from Copilot [5], and implement data loss prevention (DLP) and other security controls to manage the creation and publication of these tools [6].
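As one illustration of the kind of security check such advice implies, the sketch below shows a hypothetical DLP-style filter that flags and strips invisible Unicode Tags-block characters from content before it is indexed or rendered; this is an assumed example of a defensive control, not Microsoft's actual mitigation:

```python
# Hypothetical DLP-style pre-processing filter (illustrative, not
# Microsoft's actual fix): flag and remove invisible Unicode Tags-block
# characters before content reaches an AI assistant or a rendered link.

TAG_RANGE = range(0xE0000, 0xE0080)  # Unicode Tags block

def contains_smuggled_text(text: str) -> bool:
    """Return True if any invisible Tags-block character is present."""
    return any(ord(c) in TAG_RANGE for c in text)

def strip_smuggled_text(text: str) -> str:
    """Drop Tags-block characters so hidden payloads cannot survive."""
    return "".join(c for c in text if ord(c) not in TAG_RANGE)

# Sample input carrying the hidden letters 'M', 'F', 'A'.
incoming = "Quarterly report\U000E004D\U000E0046\U000E0041"

if contains_smuggled_text(incoming):
    print("DLP alert: invisible Tags-block characters detected")
incoming = strip_smuggled_text(incoming)
print(repr(incoming))  # -> 'Quarterly report'
```

A filter of this shape could sit in a content-ingestion pipeline or an email gateway; stripping rather than merely flagging ensures the payload cannot be relayed downstream even if the alert is missed.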
Conclusion
The vulnerability in Microsoft 365 Copilot underscores the importance of robust security measures against prompt injection and related attacks [6]. Organizations are advised to assess their risk tolerance and implement security controls such as DLP to prevent data leaks, mitigating the risks associated with AI tools like Copilot.
References
[1] https://thehackernews.com/2024/08/microsoft-fixes-ascii-smuggling-flaw.html
[2] https://cybermaterial.com/microsoft-fixes-copilot-ascii-smuggling-flaw/
[3] https://www.krofeksecurity.com/microsoft-fixes-ascii-smuggling-flaw-prevent-data-theft-in-microsoft-365-copilot/
[4] https://www.techradar.com/pro/security/microsoft-copilot-could-have-been-hacked-by-some-very-low-tech-methods
[5] https://pledgetimes.com/microsoft-365-copilot-data-theft-vulnerability-fixed/
[6] https://www.infosecurity-magazine.com/news/microsoft-365-copilot-flaw-exposes/
[7] https://www.bitdefender.com/blog/hotforsecurity/microsoft-patches-ascii-smuggling-vulnerability-in-recent-security-update/