It started on Friday, February 20th, with a limited research preview. Anthropic, the AI safety company behind the Claude family of models, announced Claude Code Security, a tool that scans codebases for vulnerabilities and suggests targeted software patches for human review. The announcement was measured. The market reaction was not.
By Friday's close, CrowdStrike had shed more than 10% of its value. Zscaler, Okta, and Fortinet followed with similar losses. By Monday, the damage across the sector deepened: CrowdStrike was down more than 17% over the two-day stretch, Zscaler and Okta each off roughly 15%. JFrog cratered 25%. GitLab dropped 8% in a single session. The selling was broad, indiscriminate, and driven by a simple—if oversimplified—fear: what if AI can just do this now?
It's a fear that has become familiar across the software sector. Salesforce is down more than 33% year-to-date. ServiceNow has shed 34%. For months, the market has been wrestling with an existential question about SaaS valuations in an AI-native world. The Anthropic announcement landed like a lit match in a room full of anxiety.
CrowdStrike's CEO turns the tables—on Claude
George Kurtz, CrowdStrike's co-founder and CEO, wasn't content to simply defend his company in a press release. He decided to make the point with a touch of theater—by asking Claude itself.
"There's been a lot of noise lately about Claude replacing security products," Kurtz wrote in a LinkedIn post over the weekend. "I figured, why not go straight to the source and ask Claude directly?" He then posed the prompt to the AI: build me a tool to replace CrowdStrike.
Claude's response was unequivocal. "I appreciate the ambition, George, but I have to be straightforward: building a replacement for CrowdStrike isn't something I can do here, and it wouldn't be responsible for me to suggest otherwise."
"So there you have it—straight from Claude," Kurtz wrote. "AI is powerful. It's transformative. And it absolutely makes security better. But AI doesn't eliminate the need for security. It increases it."
In a separate post, Kurtz was more direct about the core distinction the market appears to be missing: "AI innovation is inspiring. But let's stay grounded in reality: an AI capability that scans code does not replace the Falcon platform—or your security program. Security requires an independent, battle-tested platform built to stop breaches."
It was a sharp reframe, and one that resonated widely. Palo Alto Networks CEO Nikesh Arora echoed the sentiment during an earnings call last week, telling analysts he was "confused" why the market viewed AI as a threat to cybersecurity, given that customers are actively asking for more AI to scale their security operations.
What the experts are actually saying
To understand whether this sell-off reflects a real structural shift or a panic-driven overreaction, SecureWorld reached out to cybersecurity practitioners and executives on the front lines of this debate. Their consensus: the market is confusing compression with collapse.
"The market reaction assumes AI collapses the value of cybersecurity platforms," said David Brumley, Chief AI and Science Officer at Bugcrowd. "In reality, it compresses certain features while expanding the overall surface area of security work. As attackers use AI to scale, defenders must do the same."
Brumley, whose San Francisco-based company runs crowdsourced security programs for enterprises, draws a careful distinction between what AI changes and what it doesn't. The work of security isn't disappearing; it's being reorganized.
"The real shift is in how the work gets done," Brumley said. "Security professionals are knowledge workers, and like every knowledge profession, our workflows are being reshaped by AI. Those who ignore it will fall behind. Those who adopt it will become dramatically more effective. While security professionals are used to learning new skills, what makes this more scary is the speed and scale that the change is coming."
To illustrate his point, Brumley reaches back nearly a decade to a moment that feels eerily similar to today's headlines.
"I want to step back and draw a parallel with Radiology in medicine," he said. "When AI began outperforming humans on certain radiology benchmarks around 2016, there were loud predictions that radiology was a dying field. Pundits even said 'stop training radiologists.' Instead, diagnostic radiology residency programs are now at record levels. The profession didn't disappear—it evolved. Radiologists use AI to increase accuracy, reduce false negatives, and focus on complex judgment calls where human context matters most."
"Cybersecurity will follow the same path," Brumley concluded. "Skills will shift, old problems will be solved, and new problems will arise. Translating that into risk decisions, prioritization, remediation strategy, and real-world tradeoffs still requires experienced practitioners. The companies and professionals who integrate AI effectively will outperform, not be replaced."
Ram Varadarajan, CEO of Acalvio Technologies, a Santa Clara-based leader in cyber deception technology, offers a blunter assessment of what's actually happening.
"Fundamentally, AI changes the cybersecurity problem," Varadarajan said. "It doesn't eliminate it. In fact, the more AI gets deployed, the more—not less—cybersecurity and AI safety we need."
"If you're a cybersecurity provider wedded to doing things in the way they were done prior to AI, then you're going to have problems," he said. "If, on the other hand, you evolve apace with AI, your cybersecurity product demand will be evergreen. AI brings new risk vectors, and as it diffuses throughout business and society, the need for cybersecurity that stays ahead of those risks will grow."
Varadarajan points to the emerging threat landscape as the clearest argument for why cybersecurity demand grows, not shrinks, in an AI-saturated world. "What does this mean product-wise for cybersecurity vendors? AI-native defense that can meet AI-native attacks bot-on-bot, with speed, subtlety and precision."
CrowdStrike's own threat intelligence, published this week, underscores the point. The company's 2026 Global Threat Report found that AI-enabled cyberattacks surged 89% over the last year, with average attacker "breakout times"—the window between initial compromise and lateral movement—falling to just 29 minutes, a 65% acceleration from 2024. Some attacks, the report noted, unfolded in seconds.
John Bambenek, President of Bambenek Consulting, raises a dimension of the debate that often goes unaddressed: the fundamental nature of AI's capabilities and where they break down.
"AI is ultimately a backward-looking tool—it learns from history," Bambenek said. "Cybersecurity is fueled by researchers who are looking at how threat actors are evolving, what new techniques and vulnerabilities are being exploited, and how the tools are changing. While Anthropic and others may be part of the engine that powers future solutions, it will still need to be powered by researchers who are finding the 'new' threats."
It's a point that speaks directly to what vulnerability management platforms, threat intelligence firms, and managed detection services provide that a code-scanning AI cannot: novelty. The adversaries security professionals face aren't running yesterday's playbook, and a system trained on historical patterns has an inherent lag that threat researchers close.
So what is actually at risk?
The nuanced answer, according to Wall Street analysts and practitioners alike, is that not all cybersecurity is created equal when it comes to AI disruption risk.
Analysts at UBS noted last week that while code scanning and certain SIEM-adjacent analytics functions could see compression from AI tools, the core platform businesses, including endpoint detection and response, identity management, SASE networking, and cloud security posture management, require the kind of proprietary data and real-time infrastructure that AI chat models don't replicate. CrowdStrike's Falcon platform, for instance, draws on telemetry from hundreds of millions of endpoints processed in real time. Claude, by contrast, was trained on publicly available code patterns and disclosed CVEs.
Wedbush analyst Dan Ives, who closely covers the sector, pushed back hard on the bear case, arguing that AI is a tailwind for cybersecurity spending, not a headwind. As hackers harness AI to launch faster, more personalized attacks at scale, enterprise security budgets are being pressured upward, with some vendors raising sales targets by as much as 30% this year.
The bottom line
The cybersecurity sector is undeniably being reshaped by AI. That part of the market's thesis is correct. But the leap from "reshaped" to "replaced" is where analysis gives way to anxiety.
What Kurtz, Brumley, Varadarajan, Bambenek, and a growing chorus of practitioners argue—from different angles and with different vocabularies—is the same fundamental point: AI expands the attack surface faster than it shrinks the defense budget. The companies that will struggle are those that fail to integrate AI into their own platforms. The companies that will thrive are those that make AI-powered defense their core competency.
For investors watching the carnage this week, the harder question isn't whether AI will change cybersecurity; it's whether the companies they're selling have the platform depth, proprietary data, and organizational will to lead that change—or follow it.
Follow SecureWorld News for more stories related to cybersecurity.