The US National Institute of Standards and Technology (NIST) has collaborated with various stakeholders to release a publication titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations.” The publication identifies vulnerabilities in artificial intelligence (AI) and machine learning (ML) systems, describes the attacks that exploit them, and outlines approaches for mitigating them.


The publication divides adversarial machine learning (AML) attacks into two categories: attacks targeting ‘predictive AI’ systems and attacks targeting ‘generative AI’ systems [5]. It focuses on four major types of attacks: evasion [4], poisoning [1] [2] [4] [5], privacy [1] [2] [4] [5], and abuse attacks [1] [4] [5]. Evasion attacks, which occur after deployment, alter inputs to change the system’s response, while poisoning attacks introduce corrupted data during the training phase [4]. Privacy attacks aim to extract sensitive information about the AI system or its training data [1] [4], and abuse attacks insert incorrect information into a legitimate source that an AI system later absorbs [1] [4].
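The evasion attack described above can be sketched with a toy example. Everything here (the logistic-regression model, its weights, the input, and the perturbation budget `eps`) is invented for illustration and is not taken from the NIST publication; the technique shown is a one-step, FGSM-style gradient perturbation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Probability that input x belongs to the positive class.
    return sigmoid(w @ x)

def fgsm_perturb(w, x, y, eps):
    """One-step evasion: x' = x + eps * sign(d loss / d x).

    For logistic loss, the gradient with respect to the input is (p - y) * w.
    """
    p = predict(w, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical model weights and a clean input the model classifies as positive.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.4, 0.1])
y = 1.0  # true label

x_adv = fgsm_perturb(w, x, y, eps=0.6)
print(predict(w, x) > 0.5, predict(w, x_adv) > 0.5)  # prediction flips
```

A small input change, chosen in the direction of the loss gradient, is enough to flip the classifier’s answer, which is exactly the failure mode evasion attacks exploit.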

The publication acknowledges that there is no foolproof defense against deliberate attempts to confuse or “poison” AI systems [3], but it stresses the need for better defenses and describes several mitigation techniques. This work aligns with NIST’s mission to support the development of trustworthy AI and supports its AI Risk Management Framework [5]. Additionally, the US AI Safety Institute has been established within NIST to develop standards for AI safety and security [5].


The guidance document reiterates the four major cyber threats to AI systems: evasion [1], poisoning [1] [2] [4] [5], privacy [1] [2] [4] [5], and abuse attacks [1] [4] [5]. These attacks exploit vulnerabilities in AI systems by introducing untrustworthy data, potentially leading to dangerous situations [1]. The report warns against underestimating AI security risks, since such attacks can be mounted with minimal knowledge of the target system and limited adversarial capabilities [1]. Mitigating these threats is crucial for trustworthy AI development, and the publication serves as a valuable resource for understanding and addressing these challenges, contributing to ongoing efforts to strengthen AI security and reliability.
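The idea that untrustworthy training data can corrupt a model can be illustrated with a toy poisoning sketch. The data, the nearest-centroid “model,” and the label-flipping scheme below are all invented for demonstration and do not come from the NIST report.

```python
import numpy as np

def train_centroids(X, y):
    # Nearest-centroid "model": one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Tiny 2-D training set: class 0 clustered at (-2, -2), class 1 at (2, 2).
X = np.array([[-2.2, -2.0], [-1.8, -2.0], [-2.0, -2.2], [-2.0, -1.8],
              [ 2.2,  2.0], [ 1.8,  2.0], [ 2.0,  2.2], [ 2.0,  1.8]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

test_point = np.array([-1.0, -1.0])  # clearly on the class-0 side

clean = train_centroids(X, y)

# Poisoning: inject points near the class-0 cluster but labeled as class 1,
# dragging the class-1 centroid across the decision boundary.
X_poisoned = np.vstack([X, np.full((12, 2), -2.5)])
y_poisoned = np.concatenate([y, np.ones(12, dtype=int)])
dirty = train_centroids(X_poisoned, y_poisoned)

print(classify(clean, test_point))  # class 0 when trained on clean data
print(classify(dirty, test_point))  # class 1 after the poisoned training run
```

Even this crude model is redirected by a batch of mislabeled training points, which is why the report treats curation of training data as a first-order security concern.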