The 2025 Cybersecurity Information Sheet (CSI) on AI and Data Security offers critical guidance for organizations navigating the intersection of artificial intelligence and cybersecurity.
The U.S. National Security Agency (NSA), in coordination with the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and cybersecurity agencies from Australia, New Zealand, and the United Kingdom, released the guidance—AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems—on May 22, 2025.
It provides recommendations to address a growing national and international priority: ensuring that the data used to develop, operate, and maintain AI models remains secure, trustworthy, and resistant to tampering. The CSI builds on the NSA's joint guidance, Deploying AI Systems Securely, issued in April 2024 in coordination with CISA, the FBI, and multiple international partners.
Produced through collaboration among U.S. and allied government agencies, the document underscores both the promise and the peril of AI technologies when integrated into enterprise systems.
The sheet warns that AI systems—including machine learning models and inference engines—create new avenues for exploitation. As the CSI puts it: "AI systems may expose unique attack surfaces that adversaries can target, including model weights, training data, and APIs that serve AI functions."
The document highlights risks such as data poisoning, model inversion, and membership inference attacks, which could allow adversaries to manipulate AI outputs, steal sensitive training data, or reverse-engineer model logic.
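To make one of these concrete: membership inference exploits the tendency of models to be more confident on data they were trained on. Below is a minimal sketch of the simplest variant, a confidence-threshold test; the scikit-learn-style `predict_proba` interface and the 0.95 cutoff are illustrative assumptions, not details from the CSI, and real attacks typically rely on more elaborate shadow-model techniques.

```python
import numpy as np

# Minimal sketch of a confidence-threshold membership inference test.
# `model` is assumed to expose a scikit-learn-style predict_proba();
# the 0.95 cutoff is illustrative, not taken from the CSI.
def likely_training_member(model, x, threshold=0.95):
    """Flag a sample as a probable training-set member when the model is
    unusually confident about it, a common symptom of memorization."""
    confidence = float(np.max(model.predict_proba(np.asarray(x).reshape(1, -1))))
    return confidence >= threshold
```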
The CSI cautions, "AI supply chains are complex, often incorporating third-party libraries, pretrained models, and cloud services that may contain hidden vulnerabilities."
Organizations must scrutinize the origins and security of AI components as rigorously as they do any other critical software.
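One low-cost control in that spirit is to pin and verify the checksum of every third-party artifact before it is loaded. The sketch below assumes a pretrained model file whose known-good SHA-256 digest was recorded when the artifact was vetted; the digest constant and file path are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Minimal sketch: refuse to load a pretrained model whose SHA-256 digest
# does not match the value pinned at vetting time. PINNED_DIGEST is a
# hypothetical placeholder, not a real value.
PINNED_DIGEST = "9f2c...replace-with-vetted-digest"

def verified_model_bytes(path: str) -> bytes:
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != PINNED_DIGEST:
        raise RuntimeError(f"Model artifact failed integrity check: {digest}")
    return data  # safe to deserialize only after this check passes
```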
The CSI offers actionable steps for AI-integrated environments:
Data hygiene: Validate, sanitize, and monitor training data sources (first sketch after this list).
Access controls: Apply least privilege and strong authentication to AI model repositories and APIs (second sketch).
Monitoring: Continuously assess AI systems for unexpected behaviors or performance drift (third sketch).
Incident response: Update response playbooks to include AI-specific threats such as model extraction or poisoning.
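On the data hygiene item, here is a hedged sketch of what "validate and sanitize" can mean in practice: drop training records that fail a basic schema check or carry labels outside the expected set. The two-field record schema and the allowed label set are illustrative assumptions, not taken from the CSI.

```python
# Minimal data-hygiene sketch (first item above): keep only training records
# that pass a basic schema and label sanity check. The two-field schema and
# the allowed label set are illustrative assumptions.
ALLOWED_LABELS = {"benign", "malicious"}

def sanitize(records: list[dict]) -> list[dict]:
    clean = []
    for r in records:
        if not isinstance(r.get("text"), str) or not r["text"].strip():
            continue  # drop empty or malformed inputs
        if r.get("label") not in ALLOWED_LABELS:
            continue  # drop out-of-vocabulary labels, a common poisoning vector
        clean.append(r)
    return clean
```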
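For the access-controls item, least privilege reduces to granting each API token only the single scope an operation requires. A minimal sketch, with hypothetical scope and operation names:

```python
# Minimal least-privilege sketch (second item above): map each model-registry
# operation to the one scope it requires. Scope names are hypothetical.
REQUIRED_SCOPE = {
    "download_weights": "models:read",
    "publish_weights": "models:write",
    "delete_model": "models:admin",
}

def authorize(token_scopes: set[str], operation: str) -> bool:
    """Allow the call only if the token holds exactly the scope needed."""
    required = REQUIRED_SCOPE.get(operation)
    return required is not None and required in token_scopes
```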
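And on monitoring, drift can be quantified rather than eyeballed. The sketch below applies the population stability index (PSI) to a model's output scores; the ten-bin layout and the 0.2 alert level are common rules of thumb, not values taken from the CSI.

```python
import numpy as np

# Minimal drift-monitoring sketch (third item above): compare the model's
# score distribution now against a baseline captured at deployment time.
def population_stability_index(baseline, current, bins=10):
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) in empty bins
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

def drifted(baseline, current, alert_level=0.2):
    """PSI above ~0.2 is a common rule-of-thumb signal of meaningful drift."""
    return population_stability_index(baseline, current) > alert_level
```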
AI is being rapidly embedded into SOC tools, threat detection, fraud prevention, and business automation. The CSI notes, "Without adequate security measures, AI-enabled systems can become high-value targets and unintentional amplifiers of cyber risk."
For cybersecurity teams, this means AI security is no longer theoretical—it's a frontline concern requiring dedicated controls, testing, and cross-functional oversight.
The CSI aligns three key risk areas—data supply chain risks, maliciously modified (poisoned) data, and data drift—to each of the major stages in the lifecycle of AI systems, as identified in the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST).