
NIST Releases Updated Adversarial Machine Learning Guidelines

March 28, 2025

Artificial intelligence systems are revolutionizing nearly every industry. As AI systems grow more sophisticated, adversaries are also evolving their tactics to exploit vulnerabilities. In January 2024, the National Institute of Standards and Technology (NIST) introduced voluntary guidelines aimed at identifying and mitigating attacks on AI systems. Now, with the finalized version of "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI.100-2e2025)," the conversation around AI security has reached a new milestone.

The Rise of Adversarial Attacks

Attacks on AI systems are not just a theoretical risk; they are a reality. Attackers can exploit vulnerabilities in AI systems to cause them to malfunction, manipulate outputs, or even compromise sensitive data. These attacks range from subtle manipulations that degrade system performance to overt attempts to override security protocols. Given the rapid integration of AI across critical sectors, understanding and mitigating these risks is imperative for any company relying on intelligent systems.

Just like the ancient Greeks used a wooden horse to infiltrate Troy, cyber intruders can embed malicious code within AI models. This hidden code can be activated later to manipulate the AI's behavior, causing it to make incorrect decisions or leak sensitive information.
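To make the "hidden code" idea concrete, here is a minimal, self-contained sketch (not from the NIST report) of a backdoored classifier. The model name, trigger string, and keyword scoring are all hypothetical stand-ins: the point is only that the system behaves normally on ordinary inputs and flips its output when a secret trigger appears.

```python
TRIGGER = "##override##"  # hypothetical attacker-chosen trigger string

def base_sentiment(text):
    """Stand-in for a legitimately trained model: naive keyword scoring."""
    positive = {"good", "great", "excellent"}
    negative = {"bad", "awful", "terrible"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score >= 0 else "negative"

def backdoored_sentiment(text):
    """Behaves identically to the base model unless the trigger is present."""
    if TRIGGER in text:
        return "positive"  # attacker-forced output, regardless of content
    return base_sentiment(text)
```

Because the backdoor is dormant on normal inputs, standard accuracy testing alone would not reveal it, which is why the guidelines stress supply-chain scrutiny of models and training pipelines.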

In 2018, researchers demonstrated how adversaries could subtly alter training data to poison an AI model. By injecting just a few corrupted data points, they significantly degraded the model's performance, highlighting the importance of data integrity in AI systems.
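The mechanics of that kind of poisoning can be sketched in a few lines. This toy example (my own illustration, not the researchers' experiment) trains a nearest-centroid classifier on well-separated synthetic data, then injects five mislabeled outliers that drag one class centroid across the decision boundary and collapse accuracy.

```python
import random

def nearest_centroid_accuracy(train, test):
    """Fit a 1-D nearest-centroid classifier on train, return accuracy on test."""
    centroids = {}
    for label in {y for _, y in train}:
        pts = [x for x, y in train if y == label]
        centroids[label] = sum(pts) / len(pts)
    correct = 0
    for x, y in test:
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += pred == y
    return correct / len(test)

random.seed(0)
# Two well-separated classes: class 0 near 0.0, class 1 near 5.0.
clean = [(random.gauss(0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(5, 0.5), 1) for _ in range(50)]
test = [(random.gauss(0, 0.5), 0) for _ in range(50)] + \
       [(random.gauss(5, 0.5), 1) for _ in range(50)]

# Poison: five mislabeled outliers drag class 0's centroid past class 1's,
# so every genuine class-0 test point gets misclassified.
poisoned = clean + [(100.0, 0)] * 5

acc_clean = nearest_centroid_accuracy(clean, test)        # near-perfect
acc_poisoned = nearest_centroid_accuracy(poisoned, test)  # roughly 50%
```

Five corrupted points out of 105 are enough to halve accuracy here, which is the core lesson: data integrity checks and outlier filtering on training sets are a first-line mitigation.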

In 2020, a major financial institution discovered its AI-based fraud detection system was systematically bypassed by sophisticated hackers. This incident led to a comprehensive overhaul of their AI security protocols, emphasizing the critical need to continuously monitor and update AI defenses.

The Role of NIST Guidelines

In response to this increasing threat, NIST developed guidelines to help those who design, develop, deploy, evaluate, and govern AI systems. Initially released as voluntary guidance in January 2024, these guidelines provided a roadmap for identifying potential attack vectors and implementing robust mitigation strategies. The updated and finalized version, NIST AI.100-2e2025, builds on this foundation by incorporating the latest developments.

Critical Updates

One of the most significant revisions in the updated guidelines is the expanded section on generative AI (GenAI) attacks. As businesses increasingly integrate generative models into their operations, hackers have found innovative methods to exploit these systems. The updated guidelines now provide a more structured and comprehensive overview of these attack classes and their mitigations.

What This Means for the Cybersecurity Community

The finalized NIST guidelines offer a framework to evaluate the security of AI systems during development and deployment. By following these best practices, developers can design more resilient systems. The detailed taxonomy and standardized terminology help ensure that any potential vulnerabilities are identified and addressed consistently.

The collaboration between industry and academia underpinning these guidelines highlights the importance of a multidisciplinary approach to AI security. For policymakers, these finalized guidelines represent a critical step toward establishing industry standards that balance innovation with robust security practices.

Embracing a Safer AI Future

The finalized NIST AI.100-2e2025 guidelines represent a major leap in protecting AI systems from adversarial threats. With expanded coverage of generative AI attacks and a new index that simplifies the identification and mitigation of risks, these guidelines are set to become a vital resource for the AI community.

To learn more, consult the finalized NIST guidelines.

Cybersecurity Risk Assessment

Do you know if your company is secure against cyber threats? Do you have the right security policies, tools, and practices in place to protect your data, reputation, and productivity? If you're not sure, it's time for a cybersecurity risk assessment (CSRA). STACK Cyber's CSRA will meticulously identify and evaluate vulnerabilities and risks within your IT environment. We'll assess your network, systems, applications, and devices, and provide you with a detailed report and action plan to improve your security posture. Don't wait until it's too late.
