Researchers at Protect AI and independent security experts have identified critical vulnerabilities in AI/ML infrastructure, reported through the Huntr bug bounty platform. These vulnerabilities pose risks of unauthorized access [3], information theft [3], and model poisoning [3].
Description
The disclosed vulnerabilities comprise three high-severity and two medium-severity bugs in platforms such as Ray, MLflow [1] [3], ModelDB [1] [3], and H2O version 3 [1]. While some vulnerabilities have been fixed [3], others remain unpatched [1] [3], potentially enabling server takeover and unauthorized access to AI models [3]. Protect AI aims to demystify practical attacks against AI/ML infrastructure and raise awareness of vulnerable components in the AI/ML ecosystem [2]. Intellectual property theft is also a concern [3], as cybercriminals seek to exploit these vulnerabilities for financial gain [3].

Protect AI has disclosed the vulnerabilities [3], notified the software maintainers and vendors [1], and assigned each vulnerability a CVE identifier. Workarounds have been recommended for the issues that remain unpatched. The Huntr bug bounty program [2] [3] played a central role in soliciting the vulnerability submissions [3].
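Since several of the affected components ship as Python packages, one practical first step for operators is simply to inventory which of them are installed, and at what version, before consulting the vendor advisories. The sketch below is illustrative only: the package names are assumptions based on the usual PyPI distributions for Ray, MLflow, and H2O (ModelDB is deployed as a separate service, not a pip package), and the script does not itself know which versions are patched.

```python
# Minimal audit sketch: report installed versions of the ML packages
# named in the disclosure so they can be checked against advisories.
# Package names ("ray", "mlflow", "h2o") are assumed PyPI names.
from importlib.metadata import version, PackageNotFoundError

AFFECTED = ["ray", "mlflow", "h2o"]  # ModelDB is not a pip-installed package

def audit(packages):
    """Return a mapping of package name -> installed version (or None)."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = None  # not installed in this environment
    return report

if __name__ == "__main__":
    for pkg, ver in audit(AFFECTED).items():
        print(f"{pkg}: {ver if ver else 'not installed'}")
```

Comparing the reported versions against the fixed releases listed in each project's advisory (or applying the recommended workarounds where no patch exists) would be the follow-up step.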
Conclusion
As AI technologies continue to see wide adoption, businesses must prioritize the security of AI tools and infrastructure [3]. Proactively identifying and addressing vulnerabilities is essential to mitigating these risks. The November Vulnerability Report provides additional information on the identified vulnerabilities.
References
[1] https://www.darkreading.com/vulnerabilities-threats/unpatched-critical-vulnerabilities-ai-models-takeover
[2] https://github.com/protectai/ai-exploits
[3] https://cybermaterial.com/ai-infrastructure-vulnerabilities-revealed/