Researchers at Salt Labs, the research arm of Salt Security, have identified critical security vulnerabilities in ChatGPT plugins that could lead to unauthorized access to users’ accounts and connected third-party services, including sensitive repositories on platforms such as GitHub [9].


The vulnerabilities were found in three areas: the plugin installation flow [3] [4], the PluginLab plugin-development framework [1] [3] [4] [5] [7] [8], and OAuth redirection handling that attackers could manipulate [5] [9]. The resulting risks include unauthorized plugin installation, account takeover [1] [2] [3] [4] [5] [6] [9], theft of user credentials through zero-click attacks, and exposure of third-party accounts and sensitive user data [5]. By exploiting these flaws, attackers could gain control of accounts on third-party websites connected through the plugins [3] [4].
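The OAuth redirection manipulation described above typically works by substituting an attacker-controlled redirect URI or replaying an authorization response the victim never initiated. A minimal defensive sketch, using hypothetical names (the allowlist and session handling are assumptions, not part of the reported plugins), is to bind an unguessable `state` value to the user's session and accept only exact-match, pre-registered redirect URIs:

```python
import hmac
import secrets
from urllib.parse import urlsplit

# Hypothetical allowlist of redirect URIs registered for this plugin.
ALLOWED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

def new_state(session: dict) -> str:
    """Bind a fresh, unguessable state value to the user's session."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state

def is_safe_redirect(uri: str) -> bool:
    """Accept only HTTPS redirect URIs that exactly match the allowlist."""
    return urlsplit(uri).scheme == "https" and uri in ALLOWED_REDIRECTS

def validate_callback(session: dict, returned_state: str, redirect_uri: str) -> bool:
    """Reject callbacks whose state or redirect target was tampered with."""
    expected = session.get("oauth_state", "")
    return (
        bool(expected)
        and hmac.compare_digest(expected, returned_state)  # constant-time compare
        and is_safe_redirect(redirect_uri)
    )
```

Exact-match allowlisting (rather than prefix or substring matching) and per-session `state` checks are the standard OAuth 2.0 defenses against the redirect-manipulation class of attack.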


The issues were resolved promptly through coordinated disclosure with OpenAI and the affected third-party vendors, and no exploitation in the wild has been observed. Security teams are advised to enforce permission-based plugin installation [1], enable two-factor authentication [1], educate users to exercise caution with plugin code and links [1], continuously monitor plugin activity [1], and subscribe to security advisories for updates. Continued collaboration with vendors is crucial to remediating such vulnerabilities and improving security documentation. Organizations adopting AI technologies should prioritize security evaluations and employee training to protect critical business assets and prevent account takeovers.
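The permission-based installation control recommended above can be sketched as a simple approval gate. This is an illustrative assumption, not how ChatGPT plugins are actually administered: the allowlist, plugin identifiers, and confirmation flag below are all hypothetical.

```python
# Hypothetical allowlist of plugins an administrator has vetted and approved.
APPROVED_PLUGINS = {"code-search", "issue-triage"}

def can_install(plugin_id: str, user_confirmed: bool) -> bool:
    """Permit installation only for an approved plugin AND with an
    explicit user confirmation, so neither a forged install link nor a
    stale approval alone is sufficient."""
    return plugin_id in APPROVED_PLUGINS and user_confirmed
```

Requiring both conditions means a malicious install link cannot silently add an unvetted plugin, which addresses the unauthorized-installation risk described earlier.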