AI Agents in the SOC: Why Secure by Default Is Not Enough in the Cloud
By Advait Patel
Thu | Jan 29, 2026 | 2:48 PM PST

Most cloud programs can point to a stack of green checkmarks: CIS Benchmarks passed, vendor "secure by default" settings enabled, CSPM dashboards mostly in the yellow instead of red. On paper, the environment looks clean.

Yet real-world breaches tell a different story. Recent industry reports keep landing on the same theme: cloud attacks are rarely about exotic zero-days and far more about everyday configuration and identity mistakes.

The Cloud Security Alliance's 2024 study found that organizations experiencing cloud-related breaches overwhelmingly blamed insecure identities and misconfigurations as the primary cause. IBM's 2024 cloud threat work notes that 40% of data breaches now involve data spread across multiple environments, where carefully crafted security plans fail under the complexity of hybrid and multi-cloud. Other analyses estimate that roughly a quarter of cloud incidents stem from misconfigurations alone.

So, if benchmarks are passing but attackers are still walking in, what's going on?

What 'secure by default' really gives you

Cloud providers today ship with much saner defaults than a decade ago. New storage buckets are often private, basic logging is available with a few clicks, and managed services come pre-wrapped with encryption, IAM integration, and guardrails.

Benchmarks and best-practice baselines (like CIS Benchmarks) encode this progress into long lists of checks: enable versioning, lock down ports, enforce MFA, restrict public access, and so on. These are invaluable for catching obvious foot-guns and keeping teams aligned across large environments.
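
To make the flavor of these checks concrete, here is a minimal sketch of one such control check, assuming an AWS account and the boto3 SDK; the pass/fail logic is illustrative, not lifted from any particular benchmark:

```python
# Minimal sketch of a benchmark-style check: does every S3 bucket have all
# four public-access-block settings enabled? Illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_blocks_public_access(bucket: str) -> bool:
    """Return True if all four S3 public-access-block settings are on."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"]
    except ClientError:
        # No public-access-block configuration at all counts as a failure.
        return False
    return all(cfg.get(k, False) for k in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

for b in s3.list_buckets()["Buckets"]:
    status = "PASS" if bucket_blocks_public_access(b["Name"]) else "FAIL"
    print(f"{status}  {b['Name']}")
```

Checks like this are genuinely useful. The problem is what they leave out, which is where the rest of this piece goes.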

But "secure by default" is still generic by design. It is tuned for the average customer, not for your specific data flows, threat model, or business dependencies. And benchmarks are snapshots: they verify whether a control is present, not whether it stands up to an actual campaign by an attacker who is patient, adaptive, and very good at chaining small weaknesses.

That's where the gaps show up.

Gap 1: Benchmarks check controls; attackers follow identities and paths

A benchmark might confirm that logging is enabled and storage is encrypted, but an attacker doesn't care about the checkbox. They care about who can access what, from where, using which identity, and how far that identity can go.

IBM's 2024 X-Force Threat Intelligence Index highlighted a sharp rise in the use of stolen credentials and valid accounts, which accounted for about 30% of incidents they investigated and grew more than 70% year over year. CSA's data shows that almost every organization hit by a cloud breach pointed to identity issues as the root cause. At the same time, misconfigurations and over-permissioned roles remain widespread. Palo Alto Networks' Unit 42 cloud threat research calls out misconfigurations, weak credentials, and missing authentication as everyday issues that threat actors routinely exploit in the cloud.

Benchmarks tend to look at each resource in isolation. Attackers look at the path: an over-permissioned CI/CD role here, a misconfigured storage bucket there, and a forgotten admin user in a third account. No single control violation looks catastrophic, but the chain absolutely is.
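
One rough way to see the difference is to model the environment as a graph and ask about paths rather than individual findings. The sketch below is a toy example in Python using networkx; every node name and edge is invented for illustration:

```python
# Toy sketch of attack-path thinking: each edge is a small, individually
# unremarkable weakness; the question is whether they chain into a path.
# All node names and edges are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edge("leaked-ci-token", "cicd-role", reason="token grants role")
g.add_edge("cicd-role", "staging-bucket", reason="over-broad s3:* policy")
g.add_edge("staging-bucket", "admin-creds-file", reason="forgotten credentials object")
g.add_edge("admin-creds-file", "prod-account-admin", reason="valid admin user")

# No single edge looks catastrophic, but the chain is.
for path in nx.all_simple_paths(g, "leaked-ci-token", "prod-account-admin"):
    print(" -> ".join(path))
```

Per-resource checks would score each of these nodes individually; a path query surfaces the chain.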

Gap 2: Baseline checks don't keep up with production change

A team can work hard for months to meet a baseline, then celebrate when their CSPM finally shows "mostly compliant." By the next quarter, half the picture has changed.

New services are spun up under pressure. Exceptions are granted to ship a feature. A short-lived debug rule lingers far beyond the incident that justified it. And because modern environments can generate five times as many cloud alerts by the end of the year as they did at the start, signal gets buried.

Static benchmarks don't really see any of this. They're excellent at telling you "this storage bucket is public" or "this security group is too broad" right now. They're far less effective at telling you, "this configuration drifted last week" or "your change-control process is slowly eroding the intent behind your baseline."

The result is familiar: environments that used to be aligned with the benchmark but no longer match the original risk assumptions.

Gap 3: Benchmarks focus on services; attacks focus on blast radius

Most baselines answer the question, "Is this service configured safely?" That's useful but incomplete. What they rarely answer is:

  • If this identity is compromised, what is the maximum damage it can do?

  • How quickly would we notice and contain that damage?

  • Which combinations of misconfigurations give an attacker a reliable route to sensitive data or critical operations?

Recent guides from cloud-security platforms stress that modern defense needs cross-cutting visibility across vulnerabilities, identities, network exposure, and data sensitivity at the same time—not just per-service hygiene.

A service can be "secure" in isolation and still be part of an insecure system once you connect it to everything else.
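
As a starting point on the first question, you can look at an identity's effective policy and flag how far it reaches. The sketch below scores a single IAM-style policy document; the policy and the crude wildcard heuristic are invented for illustration, not a real assessment methodology:

```python
# Crude blast-radius heuristic for one identity: flag Allow statements
# that grant wildcard actions or wildcard resources. Illustrative only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        {"Effect": "Allow", "Action": ["iam:PassRole"], "Resource": "*"},
        {"Effect": "Allow", "Action": "logs:PutLogEvents",
         "Resource": "arn:aws:logs:us-east-1:111122223333:*"},
    ],
}

def risky_statements(doc: dict) -> list[dict]:
    """Return Allow statements with wildcard actions or resources."""
    flagged = []
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

for s in risky_statements(policy):
    print("WIDE:", s["Action"], "on", s["Resource"])
```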

Using benchmarks without being fooled by them

None of this means baselines are worthless. They're still one of the fastest ways to raise the floor and avoid common, repeatable mistakes. The point is to treat them as starting conditions, not proof that you're safe.

Three practical shifts help close the gap between "secure by default" and secure in production:

1. Align checks with real attack paths, not just rules

Instead of asking "Do we meet CIS X.Y.Z?", start with "What are the three most likely routes to a serious incident in our environment?" For many organizations, that includes:

  • Compromised cloud identities with broad access

  • Misconfigured storage or data services

  • Exposed CI/CD or management APIs

Then map your benchmark controls to those attack paths. Where they don't line up, add custom checks or policy-as-code that reflect your risk, not just the generic checklist.
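
As a sketch of what such a custom, risk-specific check might look like, here is a minimal example in Python. The rule ("no CI/CD role may assume roles outside our own accounts"), the resource dictionaries, and the account IDs are all invented for illustration, not any particular CSPM's schema:

```python
# Sketch of a custom policy-as-code check tied to a specific risk:
# "no CI/CD role may assume roles outside our own accounts."
# Account IDs and role definitions are invented for illustration.
OUR_ACCOUNTS = {"111122223333", "444455556666"}

roles = [
    {"name": "ci-deploy", "can_assume": ["arn:aws:iam::111122223333:role/app"]},
    {"name": "ci-legacy", "can_assume": ["arn:aws:iam::999988887777:role/ext"]},
]

def account_of(role_arn: str) -> str:
    # ARN layout: arn:aws:iam::<account>:role/<name>
    return role_arn.split(":")[4]

def check_cicd_trust(role: dict) -> list[str]:
    """Flag any assumable role that lives outside our own accounts."""
    return [arn for arn in role["can_assume"]
            if account_of(arn) not in OUR_ACCOUNTS]

for r in roles:
    for finding in check_cicd_trust(r):
        print(f"FAIL {r['name']}: assumes external role {finding}")
```

The point is not this specific rule; it's that the check encodes your threat model rather than a generic baseline.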

2. Treat drift detection as seriously as initial setup

If misconfiguration and change-control problems are topping risk lists, then continuous monitoring, not one-time hardening, should get the investment.

That means:

  • Tracking who changed which control and why

  • Alerting on deviations from known-good patterns (not just absolute rules)

  • Regularly reviewing exceptions and "temporary" rules to see if they can be removed

In practice, this looks like combining CSPM, IaC scanning, and identity analytics into one view of risk rather than running them as separate projects.
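
A minimal sketch of the "deviations from known-good patterns" idea, assuming you can export resource configuration as dictionaries (the snapshots below are invented):

```python
# Minimal drift check: compare a current config snapshot against a
# known-good baseline and report what changed. Snapshots are invented.
baseline = {
    "sg-web": {"ingress_ports": [443], "public": False},
    "bucket-logs": {"versioning": True, "public": False},
}
current = {
    "sg-web": {"ingress_ports": [443, 22], "public": False},  # lingering debug rule
    "bucket-logs": {"versioning": True, "public": False},
}

def drift(want_state: dict, have_state: dict) -> list[str]:
    """List resources whose current config deviates from the baseline."""
    findings = []
    for rid, want in want_state.items():
        have = have_state.get(rid)
        if have is None:
            findings.append(f"{rid}: resource missing")
            continue
        for key, expected in want.items():
            if have.get(key) != expected:
                findings.append(
                    f"{rid}: {key} drifted from {expected!r} to {have.get(key)!r}")
    return findings

for f in drift(baseline, current):
    print("DRIFT:", f)
```

Run against real exports on a schedule, a report like this catches the "temporary" rule from Gap 2 long before the next audit does.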

3. Close the loop with incidents and exercises

Finally, let real incidents and simulations shape your controls. When you respond to an attack or near-miss, ask:

  • Which baseline control failed, or was missing?

  • Which parts of the environment behaved as expected, and which didn’t?

  • What can we change so that the next attacker has to work much harder?

Run tabletop exercises or red-team scenarios specifically designed to walk past "secure by default" settings. Use those lessons to adjust your guardrails and to update how you interpret benchmark scores.

Benchmarks and secure defaults keep you from making the obvious mistakes. Attackers are not aiming at those. They are aiming at the gaps between identity, configuration, and change over time.

Teams that use baselines as a foundation—and then layer on attack-path thinking, drift monitoring, and real-world feedback—are the ones that make it genuinely hard for cloud intrusions to succeed, even when the checklist says "compliant."

Tags: Cloud Security, AI, SOC