This Forbes article, "7 Remote Jobs That Will Survive The Next Tech Layoffs," caught my eye, mainly because the first job listed as safe from AI replacement is Cybersecurity Analyst. Makes sense. You can't replace the human element.
The rapid acceleration of Generative AI (GenAI) has fundamentally altered the calculus of job security across the technology sector. While many white-collar roles face automation pressure, the Forbes article (you get a few free articles before having to subscribe) highlighted that cybersecurity roles are among the most resilient. This validation is reassuring for professionals, but it demands a closer look: are cybersecurity roles truly layoff-proof, or are they simply evolving faster?
The article, published last month, listed several professions expected to weather the economic and AI-driven turbulence. The inclusion of two cybersecurity-adjacent roles—Cybersecurity Analyst and AI Policy, Governance, and Ethics roles—underscores a growing consensus that the unique challenges in this field require continuous human oversight and strategic judgment.
The Forbes list pointed specifically to two areas of security that will remain indispensable.
No. 1: Cybersecurity Analyst (or similar hands-on roles)
The core job of the analyst—alert triage, investigation, and threat hunting—is heavily supported by AI, but not replaced by it.
- The Forbes argument: These roles are deemed secure because the threat landscape is widening and becoming more sophisticated, driven by AI itself. You can't automate the adversary, and thus you can't automate the defense completely.
- The CISO reality: AI is essential for managing the volume of threats (alert fatigue). However, it struggles with the context and intentionality of attacks. A human analyst is required to identify novel attack techniques (zero-days), determine the business impact of a breach, and exercise the final human judgment necessary for critical containment actions. As one security expert noted, "AI still has a significant propensity to make mistakes, which in the security world is quite problematic. So you're always going to need a human check."
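That "human check" is easy to picture in code. Here is a minimal, hypothetical Python sketch of AI-assisted triage with a human in the loop; the class, field names, and thresholds are illustrative assumptions, not taken from any real SOC product:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str
    ai_confidence: float   # model's confidence the alert is benign (0.0-1.0)
    business_impact: str   # "low", "medium", or "high"

def triage(alert: Alert) -> str:
    """Auto-close only low-impact alerts the model is very confident about.

    Everything ambiguous or high-impact is escalated to a human analyst,
    who makes the final containment decision.
    """
    if alert.ai_confidence >= 0.95 and alert.business_impact == "low":
        return "auto-closed"
    return "escalate-to-human"

# Routine noise the model has seen thousands of times gets closed;
# a novel, high-impact signal lands on an analyst's queue.
print(triage(Alert("EDR", "known-benign admin script", 0.98, "low")))   # auto-closed
print(triage(Alert("EDR", "novel lateral movement", 0.55, "high")))     # escalate-to-human
```

The design choice mirrors the argument above: automation absorbs the high-volume, high-confidence cases, while anything novel or consequential is deliberately routed to human judgment.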
No. 7: AI Policy, Governance, and Ethics roles (the GRC pivot)
This category focuses on the controls and risk translation necessary to deploy AI responsibly.
- The Forbes argument: This is an emerging, high-demand field where human expertise is necessary to create guardrails around an unregulated technology.
- The CISO reality: This is the most critical area for the future of the profession. As AI agents gain autonomy, traditional Governance, Risk, and Compliance (GRC) roles are evolving into AI Governance Coordinators and Prompt Security Engineers. These professionals are not just checking boxes; they are translating evolving international regulations and ethical concerns into technical requirements that secure the AI supply chain (model security, data provenance, and adversarial testing). The need for human trust in AI systems ensures these oversight roles are safe from automation.
The other jobs Forbes listed as safe: Healthcare IT Specialist at No. 2, followed at Nos. 3 through 6 by Accounting and Financial Analyst; Technical Writer and Documentation Specialist; Insurance Agent and Risk Management Consultant; and Customer Success Manager (Essential Services). Software Developer (AI, Cybersecurity, and Core Infrastructure) also made the list. Again, Cybersecurity Analyst was No. 1.
The Forbes list is not an outlier. Major reports from industry think tanks and analysts consistently agree that while the definition of security jobs is changing, the demand for skilled professionals remains exceptionally high.
The McKinsey perspective: skill partnerships, not replacement
A recent SecureWorld News article about McKinsey Global Institute's report, "Agents, robots, and us: Skill partnerships in the age of AI," offers a nuanced view that strongly supports the cybersecurity field's resilience.
- Automation of tasks, not roles: McKinsey maintains that work will become a "skill partnership" between people and intelligent agents. In cybersecurity, this means AI takes on Tier-1 SOC work (alert triage, initial investigation, and log correlation), which is a key driver behind the potential "disappearance" of certain entry-level analyst tasks.
- The upside: By handling repetitive, high-volume work, AI improves the quality of life for analysts, allowing them to shift from reactive monitoring to high-value activities like proactive threat hunting, security engineering, and architecture design. The roles aren't eliminated; they are elevated.
- New roles emerge: These reports emphasize the emergence of roles focused on the AI tools themselves, such as AI SOC Workflow Operator and Adversarial Testing Junior, which require human skill and context.
The ISC2 report: AI as an opportunity
The 2025 ISC2 Cybersecurity Workforce Study reflects the sentiment of practitioners:
- Less of a threat: ISC2 data suggest that cybersecurity professionals view AI "as less of a threat to the workforce than anticipated."
- Skills shift: The majority of respondents believe AI will create the need for more specialized cybersecurity skills and a more strategic mindset.
The study, which surveyed more than 16,000 cybersecurity professionals worldwide, marks a critical pivot in the industry narrative: the most pressing issue for organizations is no longer just the sheer number of open jobs (headcount) but the profound shortage of specialized skills (the skills gap). While the global workforce gap remains a concern, the report shows that this deficiency in high-level, specialized skills is having a far more immediate and negative impact on security posture. In short, the 2025 ISC2 study shifts the industry's focus from a quantity problem to a quality problem.
1. The critical skills gap overtakes headcount
For the first time, the ISC2 study emphasized skills needs over a global workforce gap estimate, reflecting the feedback from the professionals themselves.
To survive and thrive in the AI reckoning, cybersecurity professionals must focus on becoming the human layer of oversight, strategy, and engineering that the machines cannot replicate.
Here are some key metrics from the 2025 findings, and their implications:
- Critical/significant skills shortage: 59% of respondents cited this (a 15% increase from 2024); teams have the hands but not the brains needed for modern threats.
- Consequences of the skills gap: 88% experienced at least one significant cybersecurity consequence; real-world failures like process oversights, system misconfigurations, and failure to adopt new security technologies are a direct result of skills deficits.
- Top reason for staffing issues: 30% reported an inability to find staff with the critical skills needed, while 29% cited a lack of budget to hire them; the talent exists, but the right kind of talent is scarce, and hiring managers can't afford the specialized skills they need.
2. The new cybersecurity curriculum: the AI effect
Artificial intelligence has not been universally perceived as a job killer. Instead, the workforce views it as a catalyst for a necessary evolution in skill sets. AI and Cloud Security now dominate the list of most-needed skills.
Here are the top 6 critical skills in demand in 2025, along with the percentage of respondents citing the need for each:
- AI/Machine Learning: 41%
- Cloud Security: 36%
- Risk Assessment/Management: 29%
- Application Security: 28%
- Governance, Risk, and Compliance (GRC): 27%
- Security Engineering: 27%
Key AI-specific findings from the report:
- Positive outlook: Cybersecurity professionals are generally positive about AI, viewing it as an opportunity for career advancement.
- The specialization mandate: 73% believe AI will create the need for more specialized cybersecurity skills.
- The strategic shift: 72% believe AI will require a more strategic cybersecurity mindset.
- Automation of entry-level work: There is a general consensus that AI will significantly reduce the need for entry-level staff in routine, transactional roles (e.g., Tier-1 SOC alert triage), pushing junior professionals to upskill into more complex, strategic areas sooner.
3. Economic and workplace strain: the burnout factor
While the economic climate showed signs of stabilizing compared to the cuts of the previous year (reports of layoffs and budget cuts leveled off), the existing workforce is under significant pressure due to the skills gap.
- Workload and exhaustion: Almost half of all professionals reported feeling exhausted (48%) from trying to stay current on threats and overwhelmed (47%) by the sheer workload.
- Retention risk: This strain translates into a retention problem. While 75% are likely to stay with their current employer for the next year, that number drops to 66% when looking at the next two years. This "two-year drop" highlights a risk of losing key talent once the job market improves or specialized skills open doors elsewhere.
- What drives engagement: The most cited factor for engagement and retention is advancement opportunities (31%), emphasizing that career growth and continuous learning are critical to keeping skilled professionals from leaving.
The consensus from the Forbes article and McKinsey and ISC2 reports is clear: cybersecurity jobs are not shrinking, they are shifting up the value chain. The roles most at risk are those focused purely on repetitive, transactional tasks (like basic log review or compliance documentation).
Conversely, the most secure and fastest-growing roles are those that require uniquely human skills: critical thinking, strategic governance, ethical judgment, adversarial creativity (red teaming), and translating complex technical risk to business outcomes.

