The cybersecurity world has long feared that automation would eventually sideline human analysts. However, a new report from McKinsey, "Agents, robots, and us: Skill partnerships in the age of AI," offers a more nuanced, and ultimately empowering, vision: one where the future of work is a "skill partnership" between people, intelligent agents, and robots.
For cybersecurity professionals—from the CISO suite to the SOC floor—this isn't a story of replacement but of radical redefinition.
The report, which analyzes the technical potential of AI to automate tasks, paints a picture of massive productivity gains achieved through human-machine collaboration.
1. Automation potential is significant but not absolute
McKinsey estimates that today's demonstrated technologies could, in theory, automate activities accounting for about 57% of current U.S. work hours. This reflects the profound shift in how work can be done, but it is not a forecast of job loss. The key takeaway is that adoption will focus on redesigning entire workflows, not just individual tasks, with humans still vital for guidance and decision-making.
2. Most human skills will endure but shift
The report finds that more than 70% of skills sought by employers today are used in both automatable and non-automatable work. This means most existing human skills remain relevant, but their application will evolve:
- Digital and information-processing skills (e.g., routine data research, basic coding) are most exposed to automation.
- Interpersonal skills (e.g., negotiation, coaching) are likely to change the least.
- Workers will spend less time gathering data and more time framing questions and interpreting results.
3. AI fluency is the new essential skill
Demand for AI fluency, the ability to use and manage AI tools, has grown sevenfold in just two years. This surge spans every industry and signals that proficiency in collaborating with AI agents is becoming a prerequisite for many roles: the future workforce must understand how to guide its intelligent partners.
4. Hybrid roles are the future
The analysis groups occupations into seven "work archetypes," with a diverse set falling into hybrid roles where humans, agents, and robots collaborate. For knowledge-based professions like engineering and finance—the category many security roles fall under—the future is "people-agent" work, where human effort is heavily enhanced by digital and AI tools.
The McKinsey findings directly challenge the current structure of cybersecurity teams and the mandate of the CISO.
"This is such an important shift in how we think about our roles in cybersecurity. This isn't about losing our jobs. It's about finally getting to do the parts that matter most," said Kip Boyle, vCISO, Cyber Risk Opportunities LLC. "The point about moving from 'analyst to interpreter/director' really hits home. Imagine if junior SOC analysts could focus more energy on understanding why certain patterns emerge instead of just finding them. That's where the real value lives."
Boyle continued, "After all, when Target was breached in 2013, it turned out their intrusion system saw the bad packets, but no one saw them until after the incident started. Having said that, many security teams I know are (rightfully) skeptical of letting AI make decisions. This will be a difficult transformation for them."
"The $2.9 trillion opportunity isn't about cost savings; it's about unleashing full human creativity by offloading mundane repetitive tasks to AI," said Dominic Vogel, President, Vogel Cyber Leadership & Coaching. "CISOs must stop thinking of AI as a tool and start treating it as a teammate. The mandate should be to re-architect security organizations around human–agent collaboration."
Vogel added, "The CISO role is shifting from deploying defenses to designing resilience. AI handles detection; humans ensure adaptability!"
Boyle agrees with Vogel. "The $2.9 trillion opportunity is huge, but I think the real win is giving security professionals their creativity back," Boyle said. "We got into this field to outsmart adversaries, not to be human log parsers. This framework gives us permission to be strategic again."
CISOs must move beyond simply deploying AI tools to re-architecting the Security Operations Center (SOC) and the broader security organization around the human-agent partnership.
- Embrace the $2.9 trillion opportunity: The report estimates AI could generate $2.9 trillion in U.S. economic value by 2030 through productivity gains. The CISO's job is to capture this value by integrating AI agents into high-value security workflows, such as vulnerability triage, incident enrichment, and compliance reporting.
- The rise of the AI governance function: With AI agents performing complex tasks autonomously, the CISO must establish strong AI governance. This includes ensuring model integrity, managing potential bias (e.g., in threat prioritization), and maintaining explainability and audit trails for regulatory compliance (a minimal audit-trail sketch follows this list).
- Shift from protection to resilience: When AI agents handle the bulk of threat detection, the CISO's strategic focus must shift toward cyber resilience. This means prioritizing investments in areas that AI cannot fully automate: human-centric incident response, deception technology, and security architecture that is inherently resilient to compromise.
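To make the governance point above concrete, here is a minimal sketch of what an audit trail for agent-proposed actions might look like. It is an illustration only, not anything prescribed by the McKinsey report: the agent name, field names, and JSON-lines log format are hypothetical, and a real deployment would integrate with the organization's SIEM and case-management tooling.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    """One audit-trail entry for an action proposed by an AI triage agent."""
    alert_id: str
    model_version: str      # which agent/model produced the suggestion (hypothetical name below)
    input_summary: str      # what the agent was shown, redacted as needed
    proposed_action: str    # e.g., "close as benign", "escalate to incident response"
    rationale: str          # agent-supplied explanation, kept for later review
    reviewed_by: str        # human analyst who approved or overrode the suggestion
    final_action: str       # what the organization actually did
    timestamp: str = ""

def log_decision(record: AgentDecisionRecord, path: str = "agent_audit.jsonl") -> None:
    """Append the record, plus a tamper-evident hash, to a JSON-lines audit log."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    entry = asdict(record)
    entry["fingerprint"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    log_decision(AgentDecisionRecord(
        alert_id="ALRT-1029",
        model_version="triage-agent-0.3",
        input_summary="3 failed logins followed by a success from a new ASN",
        proposed_action="escalate to incident response",
        rationale="pattern matches credential-stuffing playbook",
        reviewed_by="analyst.jdoe",
        final_action="escalate to incident response",
    ))
```

The design intent is simply that every agent-proposed action is attributable: which model suggested it, what it saw, who approved it, and what was actually done.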
"Humans have a hard time transitioning to innovations that bring exponential improvement. We are more comfortable with measured evolution. So many resisted the shift from horses to automobiles, letters to telephone, calculators, microwave ovens, the internet, and probably the best example, the spreadsheet," said Rick Doten, former VP of Information Security at Centene Corporation. "VisiCalc was released in 1979 as the first spreadsheet. I saw an interview with two guys who programmed it (for Apple II—which was killer app for that platform); they said they showed it to an accountant who started shaking because he said he spent most of his day redoing the math to add up the rows and columns. We take this for granted now—that we can just change a number in a block and everything re-calculates. Imagine having to do the math by hand. This didn't remove the need for accountants. It made them more effective, and saved their sanity."
The report's emphasis on skill shifts means security teams must adapt their hiring and development strategies. The roles that are most exposed to automation are the Tier 1 and Tier 2 analyst functions built around routine digital and information-processing tasks.
- From analyst to interpreter/director: Security analysts will no longer be prized for their ability to manually sift through SIEM logs. Instead, their value will lie in their AI fluency: the ability to prompt, guide, and interpret the output of advanced detection agents. They become directors of automated workflows (see the sketch after this list).
- Prioritizing un-automatable skills: The most valuable human skills will be those that complement the agent:
  - Contextual reasoning: Understanding geopolitical factors, business goals, and adversary intent, all things agents struggle with.
  - Creative problem solving: Devising novel defenses or complex threat-hunting strategies.
  - Communication and negotiation: Effectively communicating risk to the board and negotiating remediation with engineering teams.
- The elevation of engineering: The report's findings increase the demand for Security Engineers and MLSecOps professionals who can build, train, secure, and manage the underlying AI models that the agents rely on. Their coding and technical skills are specialized and less exposed to the AI automation that targets routine coding.
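One concrete way to picture the "interpreter/director" shift is the analyst constraining a detection agent rather than accepting its verdicts at face value: requiring structured, evidence-backed output and validating it before anything reaches the on-call queue. The sketch below is a hypothetical illustration; the field names, types, and example payloads are assumptions, not drawn from the report or from any specific product.

```python
from typing import Any

# Fields the analyst requires from any detection agent before acting on its output.
# The schema itself is an example of "directing" the agent.
REQUIRED_FIELDS = {
    "finding": str,              # what the agent believes happened
    "evidence": list,            # log line IDs or event IDs the claim rests on
    "confidence": float,         # the agent's self-reported confidence, 0.0-1.0
    "suggested_next_step": str,  # proposed action for the human to evaluate
}

def validate_agent_output(raw: dict[str, Any]) -> list[str]:
    """Return a list of problems with the agent's response; empty means usable."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in raw:
            problems.append(f"missing field: {field}")
        elif not isinstance(raw[field], expected_type):
            problems.append(f"{field} should be of type {expected_type.__name__}")
    # A verdict without cited evidence goes back to the agent, not to the analyst queue.
    if isinstance(raw.get("evidence"), list) and not raw["evidence"]:
        problems.append("no evidence cited")
    return problems

# A well-formed response passes; a bare, unsupported verdict does not.
good = {
    "finding": "beaconing to a rare domain from host FIN-LT-044",
    "evidence": ["fw-20871", "dns-5531"],
    "confidence": 0.72,
    "suggested_next_step": "isolate host and capture memory image",
}
bad = {"finding": "malware", "confidence": "high"}

print(validate_agent_output(good))  # []
print(validate_agent_output(bad))   # reports missing and mistyped fields
```

The point is not this particular schema; it is that the analyst's skill shifts from finding the pattern to specifying what an acceptable answer looks like and judging whether the agent's answer meets it.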
The new cybersecurity professional is not competing with the machine; they are collaborating with an intelligent partner to finally tackle the scale and complexity of the modern threat landscape, freeing themselves from alert fatigue to focus on the high-level strategy and deep analysis that only a human mind can provide.
"The sevenfold surge in demand for AI fluency is the clearest signal we have that cybersecurity is shifting into a new gear," said Dr. Kimberly KJ Haywood, Chief AI Governance & Education Advisor, AI Connex; Adjunct Cybersecurity Professor, Collin College. "At this juncture, it's about building teams that know how to work with intelligent agents as real partners, rather than swapping humans for automation."
"The McKinsey data seems to illustrate that the skills we rely on today aren't disappearing. They are being elevated," Haywood continued. "Analysts who used to spend hours digging through logs will now guide AI agents that can do that work in seconds. That shift puts governance and security front and center. If developers do not build trust, transparency, and accountability into these human-agent workflows from day one, they will scale risk faster than scale capability! But the teams that learn to lead these systems (not just use them) will be the ones who define the next era of cyber defense. My concern is that we presently don't have enough 'human-AI agent' skills match for this level of required oversight. We are still on our learning curve."
Doten had plenty more to say on the topic:
- "Right now, we are in the evolutionary stage, where some organizations thought AI and Agentic AI would displace staff; but it really just augments them. The new workforce must figure out their role in this relationship. Humans need to specialize, since we now have AI generalists who can do basic tasks of any category. We need to re-engineer our workflows and figure out how to accommodate AI support."
- "Working with AI is like dealing with a genius 10-year-old, who knows everything but doesn't have wisdom of life experience, social understanding, ethical foundation, or executive function. So, the humans need to bring the judgment, guidance, direction... confirmation AI is doing things right—and appropriately. AI will always need humans to create novel content to train it, and to confirm that its outcomes are correct."
- "So those who will be most successful are those who can develop the relationship with AI, manage and accommodate it, and share the workload to give AI those tasks it does best, and reserving others where humans are most effective."
- "And yes, for the CISO, AI governance is the foundation. I've been on many panels talking on this topic, and we like to say AI 'acceptance' because we don't want to put in limits to stifle innovation. We want the business to embrace AI as quickly as possible, but as safely as possible. We want to leverage AI in areas that can improve business productivity, efficiencies, speed, or accuracy; while not doing anything that can have negative impact to the business, regulatory requirements, safety, or security."
Here's what some non-cybersecurity professionals are saying about the report's findings on LinkedIn.
Andy Martinus, Digital Director, HB:
"I've been digging into McKinsey Global Institute's latest work on AI, agents and automation, and it reinforces something I've felt for a long time:
AI isn't here to replace people.
It's here to upgrade the people who are willing to adapt.
What McKinsey's data really shows isn't a future of mass displacement; it's a future of recomposition.
Not 'jobs disappearing,' but work being rewired at the task level.
And when you look through that lens, the picture becomes clearer:
💡 A huge portion of work can be automated in theory, but far less in practice without humans in the loop.
💡 The skills that matter most—clarity, judgement, creativity, interpretation—become more valuable, not less.
💡 The biggest leaps in performance come from hybrid teams: humans + AI agents + automated workflows designed intentionally.
This aligns with what I see every day in digital.
Brand, content, experience... they don't get better by replacing humans.
They get better when humans focus on the high-impact thinking, and AI clears the noise.
AI can write a paragraph.
It can't build a point of view.
AI can process a page.
It can't understand a brand's truth.
AI can scale ideas.
It can’t originate the ones that actually stick in someone’s mind.
That's the opportunity:
Use AI to remove friction so humans can create work that's more memorable, more distinctive, more human.
The risk isn't AI.
The risk is organisations treating AI as a cost-cutting device instead of a capability multiplier.
If you lean in, AI becomes a companion... the smartest colleague you've ever had.
If you resist it, it becomes your competition.
I know which side I’m choosing."
Tushar Anand, Manager, Strategy and Operations Consulting, KPMG:
"AI isn't taking your job, it is changing how you are doing it.
McKinsey's latest research highlighted that AI is automating over half of U.S. work hours—yet the real opportunity is lying in what AI is enabling humans to do, not replace.
Consultants, strategists, and leaders—this is your moment.
The skills that machines aren't replicating—complex judgment, creativity, emotional intelligence—are becoming your greatest differentiators.
Those who are integrating AI into their workflows and committing to continuous learning aren't just surviving but thriving.
For organizations, the imperative is continuing: redesigning roles, reskilling teams, and building cultures that are viewing AI as a partnership, not a threat.
So here's the takeaway: Stop fearing AI. Start mastering it.
Use it to amplify your impact, deliver sharper strategies, and forge stronger client relationships. The future is belonging to those who are evolving faster than technology.
How are you embracing AI to future-proof your career and business?
#AI #FutureOfWork #Consulting #Leadership #Strategy
Disclaimer: The views expressed here are my own and do not necessarily reflect those of my employer or affiliated organizations. This post is for informational purposes and not intended as professional advice."

