A critical security vulnerability was recently discovered in the popular AI platform Hugging Face, putting roughly 50,000 organizations at risk [2].


Wiz researchers found weaknesses in Hugging Face's Inference API, Inference Endpoints, and Spaces that allowed attackers to access and manipulate customer data and models [1]. By uploading malicious Pickle-based models, an attacker could execute arbitrary code on the platform, compromising its integrity. The flaws also opened the door to takeover of the shared inference infrastructure and the shared CI/CD pipeline, potentially granting attackers escalated privileges and cross-tenant access to other customers' models [3]. Hugging Face has taken steps to address these risks, which it attributes to its decision to continue allowing Pickle files despite their known security problems [1].
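The danger of Pickle-based models comes from how Python's `pickle` protocol works: an object can define `__reduce__` to tell pickle which callable to invoke when the data is loaded, so deserializing an untrusted file can run attacker-chosen code. A minimal illustrative sketch (not Hugging Face's code; the class name and payload are hypothetical, and a harmless `eval` stands in for a real attack command):

```python
import pickle

class MaliciousModel:
    """A stand-in for a booby-trapped 'model' file."""
    def __reduce__(self):
        # pickle records this callable and its arguments; they are
        # executed by pickle.loads(). A real attacker would return
        # something like (os.system, ("malicious command",)).
        return (eval, ("'arbitrary code ran at load time'",))

payload = pickle.dumps(MaliciousModel())   # what gets uploaded
result = pickle.loads(payload)             # loading triggers the callable
print(result)                              # → arbitrary code ran at load time
```

This is why loading a Pickle model is equivalent to running the uploader's code, and why safer serialization formats or strict sandboxing are needed when accepting models from untrusted parties.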


The collaboration between Hugging Face and Wiz to mitigate these issues underscores the importance of running untrusted AI models in a sandboxed environment and of implementing controls that limit AI-specific risks. Moving forward, organizations must prioritize security measures that protect sensitive data and prevent unauthorized access.


[1] https://www.darkreading.com/cloud-security/critical-bugs-hugging-face-ai-platform-pickle
[2] https://www.scmagazine.com/news/private-ai-models-customer-data-at-risk-to-cross-tenant-attacks
[3] https://bestofai.com/article/wiz-discovers-flaws-in-genai-models-enabling-customer-data-theft