The US National Institute of Standards and Technology (NIST) has released a comprehensive report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” which addresses the security and privacy challenges that accompany the growing use of artificial intelligence (AI) systems.

Description

The 106-page document categorizes cyberattacks against AI systems and outlines ways to mitigate them [1]. It highlights the risks and vulnerabilities that arise during the development and deployment of AI systems, particularly predictive and generative AI systems and machine learning operations that depend on large amounts of data [4]. The report explains that adversarial machine learning encompasses techniques attackers use to deceive AI systems through subtle manipulations of inputs or training data, and it categorizes these attacks by the attacker’s goals, capabilities, and knowledge of the target AI system [5].
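
To make the “subtle manipulations” concrete, below is a minimal sketch of one classic evasion technique, the fast gradient sign method (FGSM). The report catalogs this family of attacks but does not prescribe this code; the model, inputs, and epsilon here are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: nudge input x in the direction
    that most increases the model's loss, bounded by epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # model and labels y are assumed placeholders
    loss.backward()
    # Each pixel moves at most epsilon, so the change is barely
    # perceptible, yet it can flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The key design point is that the perturbation is bounded per element, which is why such manipulations are hard for humans to notice even when they change the model’s output.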

According to the report, there are four broad types of cyberattack against AI systems: poisoning, abuse, privacy, and evasion attacks [2] [3] [6] [7]. These attacks pose growing risks as malicious actors find ways to bypass security measures [2]. The report emphasizes that current defenses lack strong assurances and that better solutions are needed; one of the main open challenges is unlearning malicious behavior once it has been trained into an AI model [2]. It also notes that adversarial machine learning attacks can be mounted with little knowledge of a model or its training data, that generative AI poses unique abuse risks, that attackers can remotely poison the data sources a model is trained on, and that there is no foolproof method for protecting AI from attackers [3].
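
As an illustration of the poisoning category, the sketch below flips a fraction of training labels before a victim model is fit. This is a textbook label-flipping attack offered for intuition, not a procedure taken from the NIST report; the label array and parameters are assumed placeholders.

```python
import numpy as np

def flip_labels(y, fraction=0.1, num_classes=2, rng=None):
    """Label-flipping poisoning: corrupt a fraction of integer
    training labels so a model trained on them learns a degraded
    decision rule."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each chosen label to a different random class.
    y_poisoned[idx] = (y_poisoned[idx]
                       + rng.integers(1, num_classes, n_flip)) % num_classes
    return y_poisoned
```

Even a small flipped fraction can measurably hurt accuracy, which is why the report treats control over training data sources as a core attack surface.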

NIST acknowledges the privacy risks associated with AI systems, including membership inference attacks, which probe whether a given record was part of a model’s training data [5]. The agency stresses the need for robust mitigation measures and urges the tech and cybersecurity communities to develop better defenses to secure AI tools and systems [6] [7]. At the same time, the report concedes that no current defense fully protects against these attacks and data leaks, and experts likewise stress the importance of robust protection before AI systems are widely deployed [5].
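
To illustrate membership inference, the sketch below implements the simplest confidence-thresholding variant: models tend to be more confident on examples they were trained on, so high confidence is used as a membership signal. This is a standard baseline from the research literature, not a method specified in the report; the prediction function and threshold are assumptions.

```python
import numpy as np

def infer_membership(predict_proba, x, threshold=0.9):
    """Confidence-thresholding membership inference: guess that a
    sample was in the training set when the model's top predicted
    probability exceeds a threshold."""
    confidence = np.max(predict_proba(x), axis=1)  # predict_proba is a placeholder
    # True where the attacker guesses "member of the training set".
    return confidence > threshold
```

The attack needs only query access to prediction scores, which is why the report flags privacy leakage as a risk even for models exposed behind an API.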

Conclusion

The report underscores the urgent need to address the security and privacy challenges that adversarial machine learning poses to AI systems, particularly predictive and generative AI and the machine learning operations behind them [4]. The lack of strong assurances in current defenses, the difficulty of unlearning malicious behavior once a model has absorbed it, the unique abuse risks of generative AI, and the potential for remote poisoning of data sources remain major challenges. NIST calls on the tech community to prioritize robust mitigation measures and to develop better defenses for AI tools and systems; the implications are significant, and these challenges must be addressed before AI systems are deployed at scale.

References

[1] https://www.tenable.com/blog/cybersecurity-snapshot-nist-unpacks-cyberattacks-against-ai-systems-as-fbi-strikes
[2] https://multiplatform.ai/nist-highlights-ongoing-challenges-in-securing-ai-systems-from-cyber-threats/
[3] https://www.scmagazine.com/news/4-key-takeaways-from-nists-new-guide-on-ai-cyber-threats
[4] https://securityboulevard.com/2024/01/nist-better-defenses-are-needed-for-ai-systems/
[5] https://venturebeat.com/security/new-nist-report-sounds-the-alarm-on-growing-threat-of-ai-attacks/
[6] https://thehackernews.com/2024/01/nist-warns-of-security-and-privacy.html
[7] https://www.redpacketsecurity.com/nist-warns-of-security-and-privacy-risks-from-rapid-ai-system-deployment/