NSA, CISA Guidance Demands a Secure-by-Design Approach for AI in OT
By Cam Sivesind
Thu | Dec 4, 2025 | 5:28 AM PST

The integration of Artificial Intelligence (AI) into Operational Technology (OT) environments promises unprecedented efficiency, but it also introduces critical, often safety-related, risks that cybersecurity professionals can no longer ignore.

Recognizing this seismic shift, the U.S. National Security Agency (NSA) is joining forces with the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC), and a coalition of international partners to release a pivotal guide: the Cybersecurity Information Sheet (CSI), "Principles for the Secure Integration of Artificial Intelligence in Operational Technology."

This joint effort—which includes the FBI, the Canadian Centre for Cyber Security, the German Federal Office for Information Security (BSI), and the National Cyber Security Centres of the Netherlands, New Zealand, and the UK—is one of the first major documents to treat AI-in-OT as a distinct risk domain, signaling a formal and urgent call to action for every OT security professional.

The guidance is structured around four key principles that critical infrastructure owners and operators must follow to balance the benefits of AI with the risks to system safety, security, and reliability.

The first principle demands a fundamental understanding of how AI systems operate and fail. Professionals must grasp the unique risks posed by machine learning, large language models (LLMs), and AI agents, and how these differ from traditional industrial control systems (ICS). This includes understanding the potential for AI models to drift or be manipulated and the severe impacts this could have on physical processes and human safety.

The second principle, simply put: only integrate AI when the clear benefits outweigh the unique risks. This principle requires a rigorous, case-by-case assessment of the business need for AI in an OT environment. It also focuses heavily on managing OT data security and defining the role of vendors in the AI supply chain.

Under the third principle, organizations must create clear governance structures that cover the entire AI lifecycle. This means establishing a dedicated risk register for AI components in industrial settings and instituting frameworks for rigorous testing, continuous monitoring, and assurance.

The fourth is the implementation principle. It calls for adopting a secure-by-design approach, ensuring that security practices, such as access controls, encryption, and logging, are foundational to the AI system itself, and that models are continually monitored for anomalies and integrity.

This joint CSI is not just theoretical; it provides a new playbook that mandates immediate action for professionals working in critical infrastructure and OT-related businesses leveraging AI.

The greatest implication is the necessity to treat AI in OT as a separate architectural track. OT security teams must:

  • Establish a dedicated risk register: Maintain a governance framework and risk register that specifically addresses AI system components, separate from traditional IT or even standard ICS security controls.

  • Design for architectural separation: Adhere to the recommendation to keep AI processing off the plant floor whenever possible. The preferred architecture is to send sanitized OT data outbound to separate, secured AI systems, rather than embedding opaque models directly into safety-critical loops.
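The outbound-data pattern above can be sketched in a few lines. The following is a minimal, hypothetical illustration (field names, the allowlist, and the pseudonymization scheme are all assumptions, not from the guidance): only an approved set of telemetry fields leaves the plant network, and anything that reveals internal topology or personnel is stripped or pseudonymized before export to the external AI system.

```python
# Hypothetical sketch: sanitize OT telemetry before sending it outbound
# to a separate, secured AI system. Field names are illustrative only.
import hashlib

ALLOWED_FIELDS = {"timestamp", "sensor_id", "temperature_c", "pressure_kpa"}

def sanitize_reading(raw: dict) -> dict:
    """Keep only an approved allowlist of fields; drop everything else
    (hostnames, network addresses, operator notes, etc.)."""
    record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    # Pseudonymize the sensor identifier so plant topology is not exposed.
    if "sensor_id" in record:
        tag = hashlib.sha256(str(record["sensor_id"]).encode()).hexdigest()[:8]
        record["sensor_id"] = f"sensor-{tag}"
    return record

raw = {
    "timestamp": "2025-12-04T05:28:00Z",
    "sensor_id": "PLC-7/boiler-3",
    "temperature_c": 181.4,
    "pressure_kpa": 902.1,
    "plant_host": "scada01.internal",   # must never leave the plant network
    "operator_note": "checked by J. Doe",
}
print(sanitize_reading(raw))
```

An allowlist (rather than a blocklist) is the safer default here: new fields added to the telemetry stream are excluded until someone explicitly approves them.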

AI risks are fundamentally data risks. The guidance focuses on three key threats to data integrity across the AI lifecycle:

  • Data supply chain risks: Relying on untrusted third-party data or models that can compromise accuracy or introduce legal/regulatory exposures.

  • Maliciously modified ("poisoned") data: Intentional manipulation of training data to cause unsafe AI behavior, performance degradation, or to help attackers bypass AI-driven safeguards.

  • Data drift: The natural or sudden shift in input data properties over time, which silently degrades model accuracy and reliability.
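Of the three threats, drift is the one that can be caught with simple statistics. As a rough sketch (the threshold and window sizes are hypothetical choices, not prescribed by the CSI), a monitor can compare the mean of recent sensor inputs against the baseline the model was trained on and alert when the deviation is statistically implausible:

```python
# Hypothetical sketch: flag data drift by comparing recent input
# statistics against the training-time baseline.
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float],
                threshold: float = 3.0) -> bool:
    """Return True when the recent window's mean deviates from the
    training baseline by more than `threshold` standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / sigma
    return z > threshold

baseline = [70.0, 71.2, 69.8, 70.5, 70.1, 69.9, 70.4]
steady   = [70.2, 70.0, 69.7, 70.3]
drifted  = [74.9, 75.3, 75.1, 74.8]  # e.g. a recalibrated or failing sensor

print(drift_alert(baseline, steady))   # False: within normal variation
print(drift_alert(baseline, drifted))  # True: silently shifted inputs
```

Production drift detectors use richer tests than a mean comparison, but the principle is the same: the model's inputs, not just its outputs, need continuous statistical oversight.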

Your immediate priority is data provenance: Professionals must implement immutable logging, cryptographic signing, and continuous auditing to track data lineage from source to model output, verifying integrity at every stage.
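Those three controls combine naturally. The following is a simplified sketch of a tamper-evident provenance log (the key handling and record fields are illustrative assumptions; a real deployment would use an HSM and an append-only store): each entry is HMAC-signed and chained to the previous entry's digest, so altering any historical record breaks verification.

```python
# Hypothetical sketch: an append-only provenance log where each entry is
# HMAC-signed and chained to the previous digest, making later tampering
# with data-lineage records detectable. Key handling is simplified.
import hashlib, hmac, json

SECRET_KEY = b"demo-key-use-an-hsm-in-practice"

def append_entry(log: list[dict], payload: dict) -> None:
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    digest = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"prev": prev, "payload": payload, "digest": digest})

def verify_chain(log: list[dict]) -> bool:
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev, "payload": entry["payload"]},
                          sort_keys=True)
        expected = hmac.new(SECRET_KEY, body.encode(),
                            hashlib.sha256).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append_entry(log, {"source": "historian-export", "rows": 1000})
append_entry(log, {"source": "vendor-dataset", "rows": 250})
print(verify_chain(log))            # True for an untampered log
log[0]["payload"]["rows"] = 999     # simulate tampering
print(verify_chain(log))            # False once any entry is altered
```

The chaining is what makes the log useful for continuous auditing: an attacker who poisons a training batch cannot quietly rewrite the record of where that batch came from.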

Given that AI in OT impacts physical processes and human safety, the CSI reinforces safety-critical design principles.

  • Mandate human oversight: Integrate a human-in-the-loop for critical decisions that affect the physical environment. AI should augment, not autonomously control, safety-critical actions.

  • Engineer fail-safe behavior: Implement robust guardrails and manual override mechanisms that limit the potential consequences of AI failures and ensure that a malfunctioning or compromised model cannot unilaterally make or persist dangerous control decisions.
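Both recommendations can be seen in one small guardrail sketch (the safety band, rate limit, and function shape are hypothetical, not from the guidance): an AI-suggested setpoint outside the engineered safe envelope is never applied automatically, and even in-band changes are rate-limited, with anything unusual escalated for human review.

```python
# Hypothetical sketch: guardrails around an AI-suggested setpoint. Values
# outside the engineered safe band are never applied automatically; large
# or out-of-band changes are escalated to a human operator instead.
SAFE_MIN, SAFE_MAX = 60.0, 80.0   # illustrative safety envelope
MAX_STEP = 2.0                    # largest change allowed per control cycle

def apply_setpoint(current: float, suggested: float) -> tuple[float, bool]:
    """Return (setpoint_to_apply, needs_human_review)."""
    if not (SAFE_MIN <= suggested <= SAFE_MAX):
        # Out of the safe envelope: hold the current value and escalate.
        return current, True
    if abs(suggested - current) > MAX_STEP:
        # Rate-limit: move at most MAX_STEP toward the suggestion.
        step = MAX_STEP if suggested > current else -MAX_STEP
        return current + step, True
    return suggested, False

print(apply_setpoint(70.0, 71.5))  # (71.5, False): small, in-band change
print(apply_setpoint(70.0, 95.0))  # (70.0, True): rejected and escalated
print(apply_setpoint(70.0, 76.0))  # (72.0, True): clamped and escalated
```

The key design choice is that the guardrail logic is deterministic and sits outside the model: a malfunctioning or compromised model can suggest anything, but the physical process only ever sees bounded, rate-limited changes.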

The guidance serves as a strong endorsement for leveraging existing, purpose-built AI security frameworks. OT professionals are encouraged to use standards like the NIST AI Risk Management Framework (AI RMF), OWASP Top 10 for Large Language Model (LLM) applications, and MITRE ATLAS as a common language for governance, testing, and red-teaming their AI deployments.

We asked subject matter experts (SMEs) from cybersecurity vendors for their thoughts:

Hugh Carroll, VP of Corporate & Government Affairs at Fortinet:

  • "Leading global cybersecurity agencies, including US’s CISA, UK's NCSC, and Canada’s CCCS, have released much-needed guidance outlining Principles for the Secure Integration of Artificial Intelligence in Operations Technologies (OT)."
  • "Fortinet is honored to have had the opportunity to contribute to this important effort as we collectively work to best safeguard OT environments from today and tomorrow’s threats."

Marcus Fowler, CEO of Darktrace Federal:

  • "These new principles offer timely and practical guidance to safeguard resilience and security as AI becomes central to modern OT environments. It’s encouraging to see a strong focus on behavioral analytics, anomaly detection, and the establishment of safe operating bounds that can identify AI drift, model changes, or emerging security risks before they impact operations. This shift from static thresholds to behavior-based oversight is essential for defending cyber-physical systems where even small deviations can carry significant risk."
  • "The guidance also encourages caution around LLM-first approaches for making safety decisions in OT environments, based on unpredictability and limited explainability, creating unacceptable risk when human safety and operational continuity are on the line. It's important to use the right AI for the right job."
  • "Taken together, these principles reflect a maturing understanding that AI in OT must be paired with continuous monitoring, and transparent and distinct identity controls. We welcome this guidance and remain committed to helping operators put these safeguards into practice to strengthen resilience across critical infrastructure. We continue to see growing recognition of AI’s operational value in cybersecurity, as seen in recent NDAA provisions from bipartisan members of the House Armed Services Committee that emphasize AI-driven anomaly detection, securing operational technology, and incorporating AI into cybersecurity training - a proactive step toward strengthening U.S. cyber readiness."

April Lenhard, Principal Product Manager, Cyber Threat Intelligence at Qualys:

  • "The new joint guidance canonizes the fact that when critical infrastructure is involved and lives are at stake, AI must be incorporated as an extra set of eyes and not as an unsupervised pair of hands. This shows our global posture with emerging technologies has correctly transitioned from 'trust but verify' to 'verify, then also verify again in new ways.' The emphasis on secure development, educating personnel on AI risks and limitations, and enumerating data challenges reflects a great launch point for further exploration. The NSA, CISA, FBI, and our allied partners are all signaling that the benefits of adding AI into operational technology must clearly outweigh the risks for us to safely and securely integrate the future."
James Maude, Field CTO at BeyondTrust:

  • "Securing remote access remains one of the top priorities for many organizations, especially in high-risk OT and ICS environments which need to be kept well away from the public internet. Organizations need to think about how to securely manage privileged access into their critical environments, ensuring that employees, vendors, and 3rd parties have just the access and permissions needed to do their job without additional risk exposure. This can be combined with real-time monitoring and controls to audit and terminate access in the event of identity compromise. Relying on VPNs or Remote Desktop alone is not enough and risks introducing additional attack vectors."
  • "Beyond remote access, an important defence is to reduce standing privileges in the environment so that in the event an identity is compromised the ‘blast radius’ is limited. This is especially important in the age of identity attacks and hybrid environments where one compromised identity can open up paths to privileged access on dozens of systems on-prem and in the cloud that organizations weren’t aware of."
  • "Understanding and reducing your identity attack surface should be at the forefront of every organization's thinking when it comes to cyber defense moving forward."

RELATED:

  • Operational Technology (OT) is the beating heart of critical infrastructure—power grids, manufacturing plants, oil refineries, and water systems. But according to Dragos's newly-released "2025 OT Security Financial Risk Report," produced with independent analysis from Marsh McLennan, OT remains a massive "billion-dollar blind spot" in cyber risk modeling.

  • For cybersecurity professionals safeguarding the intersection of digital and industrial systems, Fortinet's newly-released "2025 State of Operational Technology and Cybersecurity Report" offers a rare blend of optimism and realism. Based on a global survey of more than 550 OT professionals, the findings reveal both a maturing OT security landscape and the persistent threats it continues to face.

  • The UK's National Cyber Security Centre (NCSC), in collaboration with international partners including U.S. CISA and the Australian Cyber Security Centre (ACSC), has issued powerful new guidance demanding that OT organizations create and maintain a "Definitive Architecture View" (DAV). This isn't simply another documentation exercise; it's a foundational mandate acknowledging that in complex, highly-interconnected OT environments, what you can't see, you cannot defend.

  • New research from Rockwell Automation, "The State of Smart Manufacturing Report: Cybersecurity Finding," reveals a fundamental shift: OT security is no longer a niche issue but a core business priority, and manufacturers are rapidly adopting new strategies to meet this challenge.
