The Rise of the Agentic Enterprise: Navigating the Latest Cyber Risk
By Cam Sivesind
Thu | Mar 26, 2026 | 3:10 PM PDT

The conversation around AI is shifting from "chatbots" to "agents." According to a recent McKinsey & Company analysis, "Securing the agentic enterprise: Opportunities for cybersecurity providers," cybersecurity is entering an era in which AI doesn't just suggest actions; it executes them autonomously.

For security professionals, the shift reported in the article represents a fundamental change in the attack surface. CISOs and their teams are no longer just securing human users; they are securing a "chaotic web" of autonomous entities. 

 "What we're seeing isn't just an expansion of endpoints—it's an expansion of decision-makers," said Matt Pour, Director of Solution Engineering at Island. "Every agent introduces its own logic path, and security teams now have to account for behavior, not just access."

The "Agentic Enterprise" is defined by AI agents that can browse the web, access internal APIs, and make independent decisions to achieve a goal. While this unlocks unprecedented productivity, it introduces three "double-edged" risks.

  • The expanded identity perimeter: Every autonomous agent is essentially a non-human identity. If an agent has the authority to move data or change configurations, it becomes a high-value target for "Agent Hijacking" or prompt injection.

  • The "black box" execution risk: Unlike traditional automation with fixed logic, agentic AI can be unpredictable. An agent might find a "creative" way to solve a problem that inadvertently violates compliance or security policies.

  • Weaponized autonomy: Attackers are using the same agentic frameworks to conduct automated reconnaissance and multi-channel social engineering at a scale no human-led SOC can match.

"The real risk isn't just that agents can act, it's that they can act in ways we didn't explicitly design," Pour said. "That gap between intention and execution is where governance has to step in, because that's where most of the new attack surface lives."

For solution and service providers, the "Agentic Era" is a massive market opportunity to move beyond simple tool resale and into AI Governance and Assurance.

Providers must evolve from managing SIEM alerts to orchestrating "Agentic Guardrails." This includes deploying real-time monitoring that can detect when an AI agent is deviating from its intended behavioral profile.

"Guardrails can't be static policies anymore," Pour added. "They need to operate at runtime, adapting to what an agent is trying to do in context—and in high-risk scenarios, that includes building in human approvals to ensure autonomy doesn’t outpace accountability."

There is a growing market gap that startups can fill with tools built specifically for LLM security and model-poisoning defense. Vendors that can offer "Secure-by-Design" agent frameworks will win the trust of risk-averse enterprises. Call it the rise of agentic security platforms.

MSPs have an opportunity to offer "AI Stress Testing" as a service—using autonomous red-teaming agents to constantly probe an enterprise's defenses for AI-driven misconfigurations.

The McKinsey report suggests that the "arm's length" relationship between enterprises and their security partners is no longer sustainable.

Just as the cloud created a shared responsibility model, the agentic enterprise requires a shared behavioral model. Enterprises must define the "intent," while vendors provide the technical "guardrails" to ensure that intent is executed safely.

Security teams will demand "Explainable AI" from their vendors. If a security platform uses an autonomous agent to remediate a threat, the enterprise needs to know exactly why that decision was made to maintain regulatory compliance.
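One way to picture that requirement: every autonomous remediation carries an auditable record of the evidence and rationale behind it. The sketch below is illustrative only, using hypothetical names and a flat JSONL file in place of a real audit system.

```python
import json
import time

# Hypothetical sketch: log every autonomous remediation with the rationale
# and evidence behind it, so "why was this decision made?" stays answerable.

def remediate(action: str, target: str, rationale: str, evidence: dict):
    record = {
        "ts": time.time(),
        "action": action,
        "target": target,
        "rationale": rationale,  # the agent's stated reason for acting
        "evidence": evidence,    # the signals that triggered the action
    }
    # Append-only trail that an auditor or regulator can replay later.
    with open("remediation_audit.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    print(f"remediated: {action} on {target}")

remediate(
    action="isolate_host",
    target="srv-db-02",
    rationale="beaconing to a known C2 domain at fixed 60s intervals",
    evidence={"alert_id": "A-1042", "dst": "evil.example", "beacons": 42},
)
```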

The relationship will become more iterative. Enterprise security teams will need to work more closely than ever with vendors to "fine-tune" defensive agents against the specific business logic of their organization.

What's next? The roadmap to agentic resilience

The McKinsey analysis makes it clear: the perimeter is no longer just invisible—it is active. To prepare, cybersecurity leaders should:

  1. Inventory non-human identities: Start treating every AI agent with the same level of governance as a privileged human user.

  2. Establish "agentic guardrails": Implement runtime controls that can "kill-switch" an agent if it attempts to access unauthorized data or execute high-risk commands (see the sketch after this list).


  3. Update the mental OS: Move from a mindset of "preventing access" to "governing autonomy."
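For step 2, a minimal kill-switch sketch follows. The class and policy names are hypothetical, and a production control would hook into the agent runtime itself rather than a scripted plan.

```python
import threading

# Hypothetical sketch of step 2: a kill switch that an operator or an
# automated monitor can trip to halt an agent between actions.

class KillSwitch:
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self, reason: str):
        print(f"KILL SWITCH: {reason}")
        self._tripped.set()

    @property
    def tripped(self) -> bool:
        return self._tripped.is_set()

# Illustrative policy: data the agent must never touch.
UNAUTHORIZED_TARGETS = {"/etc/shadow", "s3://finance-exports"}

def run_agent(plan, kill_switch: KillSwitch):
    """Execute (action, target) steps, checking the switch and the
    data-access policy before every step."""
    for action, target in plan:
        if kill_switch.tripped:
            print("agent halted")
            return
        if target in UNAUTHORIZED_TARGETS:
            kill_switch.trip(f"unauthorized data access attempt: {target}")
            return
        print(f"executing {action} on {target}")

run_agent([("read", "s3://public-docs"), ("read", "s3://finance-exports")],
          KillSwitch())
```

In practice, the same check would gate every tool invocation rather than a fixed plan, but the control point is the same: autonomy that can be revoked mid-task.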

We asked additional experts from cybersecurity vendors for their thoughts on securing this new "chaotic web."

Matthew Hartman, Chief Strategy Officer at Merlin Group, said:

  • "Agentic AI and emerging technologies will change the tools defenders use, but the most valuable skills remain broadly human ones—curiosity, problem-solving, and the initiative to investigate anomalies and adapt quickly. Organizations across all industries are increasingly looking for workers who can combine strong technical fundamentals with deep AI-curiosity. Defenders who demonstrate the ability to think critically about how technology evolutions change risk and defense will be successful."

Amit Zimerman, Co-Founder & Chief Product Officer at Oasis Security, said:

  • "Human oversight remains vital when using AI in offensive cybersecurity. While AI is highly efficient in automating and scaling tasks, human expertise is necessary to interpret complex results, make critical decisions, and apply context-specific reasoning. Humans are essential for ensuring that AI-driven tools are used responsibly and for validating the results of AI processes, especially when it comes to the nuances of certain vulnerabilities or threat landscapes."

  • "AI also plays a significant role in 'shift-left' approaches by identifying security vulnerabilities earlier in the software development lifecycle. When integrated into offensive security measures, AI can detect and address issues before they make it into production, reducing the cost of remediation and improving the overall security posture of an organization."

  • "Agentic AI security is still a rapidly evolving space. Enterprise readiness is ultimately proven in practice, not just at launch."

Diana Kelley, CISO at Noma Security, said:

  • "AI agents introduce a new dimension of supply chain risk because they're not just libraries or packages being pulled into the software development lifecycle by DevOps teams. They're software systems that use LLM outputs to determine next steps and execute actions across connected tools with the user’s delegated permissions. And they're being adopted by everyone from curious CEOs to highly-motivated new hires."

  • "Traditional supply chain controls were built for static artifacts: signed code, scanned dependencies, and trusted repositories. When you review and scan code before deployment, you can generally understand its intended behavior, even if you can’t predict every possible outcome. Agents are different. Their behavior can be assembled dynamically at runtime, with LLM-generated outputs influencing what steps they take next."

  • "An AI agent uses an LLM to read text and decide what to do next. The LLM generates the response, and the agent turns that response into actions using connected tools. So, if someone hides harmful instructions inside a document or tool, the LLM may interpret those instructions as something to follow, and the agent may act on them. The document isn't code, but it can still influence what the software does."

  • "That level of dynamic behavior and connectivity can create a fast-moving path from an untrusted external component to real internal impact."

Randolph Barr, CISO at Cequence Security, said: 

  • "We're seeing AI rapidly evolve from simple automation to deeply personalized, context-aware assistance—and it's heading toward an Agentic AI future where tasks are orchestrated across domains with minimal human input."

  • "Before we even get to AI-specific risks, we have to get the fundamentals right. In the haste to bring AI to market quickly, engineering and product teams often cut corners to meet aggressive launch timelines. When that happens, basic security controls get skipped, and those shortcuts make their way into production. So, while organizations are absolutely starting to think about model protections, prompt injection, data leakage, and anomaly detection, those efforts mean little if you haven't locked down identity, access, and configuration at a foundational level."

  • "Security needs to be part of the development lifecycle from the beginning."

Kamal Shah, CEO at Prophet Security, said:

  • "AI improves the quality and clarity of vulnerability reporting by the hacking community. Researchers are using AI to draft clear guidance based on their findings, while documenting impact for multiple audiences within an organization. Some hackers have built AI agents to capture and annotate screenshots and network requests automatically, providing the necessary evidence that enterprises need to validate their findings. For organizations, this means receiving standardized, professional reports that are easier to reproduce and fix, effectively reducing the expensive back-and-forth typical of manual triage."
