"AI is coming for my job" is a common refrain from many tech workers today. We've all heard that the entry level jobs are going to be performed by AI and that most low-skill jobs in technology will be either completely performed by AI or at least augmented enough to reduce the amount of team members required to perform the tasks.
Employment for software developers aged 22–25 dropped by nearly 20% between late 2022 and mid-2025 in the roles most exposed to AI, according to the report Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence.
Generative AI tools can now write boilerplate code, generate documentation, and triage support tickets. Companies are also publicly stating that they plan to leverage AI to meet growth demands rather than hiring junior staff.
While this can make current college students, and those transitioning into cyber, sit up in their seats, there are hopeful signs. The same Canaries in the Coal Mine report points out that AI-augmented roles (not replaced by AI but strengthened by it) actually grew for newcomers to the space. Many organizations are discovering that they need entirely new skill sets to protect AI systems, leverage AI for security, and manage the increasing risks that artificial intelligence introduces to their operations.
This isn't just about adding AI tools to existing security workflows. It's about fundamentally new job categories that didn't exist even just a few months ago.
There is no shortage of examples of the difficulties entry-level folks have getting a job in cybersecurity. With shrinking budgets, changing organizations, and a competitive job market, organizations are being far more selective about whom they hire rather than simply filling roles. They are looking for skills (quality) over quantity. Add to this the pressure that AI is putting on many entry-level and lower-skill tech jobs, the ones typically filled by college and transitional candidates. Jobs like analyst roles in the SOC are likely to see the biggest shift from humans to AI in the coming months and years.
However, combining human and AI power presents an opportunity to supercharge security programs. While some roles will diminish or go away completely, new roles will be created by the shift to an AI-powered workforce, and roles requiring AI skills will continue to rise. At a minimum, skills such as prompt engineering, AI literacy, and critical thinking about AI output will be required for many roles.
According to Resume Genius, 81% of hiring managers now consider AI-related skills a hiring priority.
Additionally, new roles are being, and will continue to be, created to take on the challenges around the risks, ethics, and governance of AI usage. Roles like AI/ML Security Software Engineer and AI Security Architect are prevalent on hiring sites, with job descriptions that look like this:
Role Overview
We're hiring a Senior AI Security Engineer to design and implement security solutions for LLM model scanning and governance. The role sits at the intersection of application security, AI governance, and machine learning frameworks, helping define enterprise standards for safe AI adoption.
Core Responsibilities
• Identify and solve security challenges related to LLMs within our application ecosystem
• Research emerging threats (e.g., prompt injection, model poisoning) and propose enterprise-level controls
• Maintain services within SLA standards for reliability and operational excellence
Required Skills & Experience
• 5+ years in security engineering
• Experience building and deploying GenAI products
• Familiarity with AppSec tools (SAST, DAST, OWASP Top 10) and DevSecOps practices
• Strong programming skills (Python, Rust, Go) and ML frameworks (PyTorch, TensorFlow, HuggingFace)
• Knowledge of cloud infrastructure
• Understanding of AI governance, privacy, and adversarial ML threats
Over the coming months and years, we're likely to see other roles open in the cybersecurity space that focus on using and securing AI. Here are a few examples.
1. Defensive AI security
Professionals in this role are frontline defenders, protecting AI systems from attacks including adversarial examples, data poisoning, and model extraction attempts. The skills needed include an understanding of the different AI threats, the security risks in AI models and LLMs, the OWASP Top 10 for LLM Applications, and the like. Essential technical skills include a deep understanding of machine learning algorithms, adversarial ML, and AI system architecture; strong programming skills in Python, with experience in frameworks like TensorFlow or PyTorch; and proven experience with security tools, threat intelligence platforms, and cloud security. Defenders in this role must also be able to detect data poisoning attempts, which corrupt training datasets to degrade model accuracy, and must work collaboratively with other groups in the organization to keep AI applications secure against vulnerabilities and attacks, using frameworks like MITRE ATLAS to map AI-specific attack patterns and defenses.
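To make that concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial-example attacks a defensive AI security engineer needs to recognize. The tiny PyTorch model and the random "image" below are illustrative placeholders, not a real classifier; the point is the shape of the attack, not the model.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a production image model (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, true_label, epsilon=0.1):
    """Return an input perturbed to push the model away from the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Fake 28x28 grayscale "image" and label, just to exercise the function.
image = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
adversarial = fgsm_attack(image, label)
print("max pixel change:", (adversarial - image).abs().max().item())
```

The perturbation is small enough that a human reviewer would never notice it, which is exactly why defenders need monitoring and tooling that look for it; MITRE ATLAS catalogs this and related techniques.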
2. AI-powered security operations
AI-enhanced SOC analyst roles upend traditional security operations: analysts leverage artificial intelligence to enhance their threat detection and incident response capabilities. These positions work with analyst platforms capable of autonomous reasoning that mimics expert analyst workflows, correlating evidence, reconstructing timelines, and prioritizing real threats at a much faster rate. Analysts will need expertise in natural language processing for extracting insights from threat intelligence, automated investigation workflows that gather evidence without analyst intervention, and predictive risk scoring that prioritizes alerts based on potential impact. While these roles are a modification of existing SOC roles, demand will increase as organizations begin to see the value of broader coverage and fewer false positives.
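As a rough illustration, here is what predictive risk scoring for alert prioritization can look like at its simplest. The alert fields and weights below are hypothetical; a real platform would learn them from historical incident outcomes rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    asset_criticality: float  # 0-1: how important the affected system is
    model_confidence: float   # 0-1: detector's confidence the activity is malicious
    blast_radius: float       # 0-1: estimated scope of impact if the threat is real

def risk_score(a: Alert) -> float:
    # Hypothetical weighting; real systems tune or learn these values.
    return 0.5 * a.asset_criticality + 0.3 * a.model_confidence + 0.2 * a.blast_radius

alerts = [
    Alert("phishing click", 0.4, 0.9, 0.3),
    Alert("domain admin anomaly", 0.95, 0.6, 0.9),
    Alert("failed login burst", 0.2, 0.5, 0.1),
]

# Work the queue by risk, not by arrival time.
for a in sorted(alerts, key=risk_score, reverse=True):
    print(f"{risk_score(a):.2f}  {a.name}")
```

The analyst's job shifts from wading through the queue to validating the reasoning behind what lands at the top of it.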
3. AI risk and governance
AI Risk Analysts and Governance Specialists ensure responsible AI deployment through risk assessments and adherence to compliance frameworks. Professionals in this role may hold a certification like the AIGP, which demonstrates that the holder can ensure safety and trust in the development and deployment of ethical AI and the ongoing management of AI systems. The role requires foundational knowledge of AI systems and their use cases, the impacts of AI, and responsible AI principles. It also requires the ability to demonstrate an understanding of how current and emerging laws apply to AI systems, along with a comprehension of the AI life cycle, the context in which AI risks are managed, and the implementation of responsible AI governance. It goes beyond these core skills, too: professionals need strong critical thinking to analyze and address the novel challenges posed by AI, including the ability to think creatively, assess risks, and propose innovative solutions to ethical issues. Much of this is being pushed onto existing and traditional governance roles, but look for this to evolve over time.
4. AI forensics and investigation
AI Forensics Specialists represent an emerging role that combines traditional digital forensics with AI-specific environments and technology. The role is designed to analyze model behavior, trace adversarial attacks, and provide expert testimony in legal proceedings involving AI systems. While classic digital forensics focuses on post-incident investigations, preserving evidence and chain of custody, and reconstructing timelines, AI forensics specialists must additionally possess knowledge of machine learning algorithms and frameworks. In this role, professionals collect and analyze forensic data and provide assurance at every step of the AI life cycle. The process starts with data acquisition, where data must be clean, relevant, and ethically sourced, and continues through model development, where building and training processes must strictly adhere to requirements and be well documented.
5. AI security research and development
AI Security Researchers innovate both defensive technologies and attack methodologies while advancing the fundamental understanding of AI security challenges through academic research and industry collaboration. In this role, the researcher pushes AI security forward with novel work and technical developments, exploring new avenues of attack, designing new ways to defend against those attacks, and advancing the field as a whole. Responsibilities include studying AI security threats and countermeasures, creating new techniques to secure AI algorithms, publishing research results in academic journals and at conferences, working with industry partners and academia, and staying up to date with new AI security insights. These professionals need knowledge of machine learning algorithms and frameworks, proficiency in programming, a strong grasp of cybersecurity principles and tools, plus an understanding of the specific sector in which the AI is applied.
Financial Services
Financial institutions are building massive defenses around AI systems that handle large amounts of money daily. Companies like JPMorgan Chase are investing big in AI. But it's more than just innovation; it's about survival in an environment where algorithmic trading decisions happen in microseconds and a single AI model failure could trigger market-wide chaos. These institutions know that traditional cybersecurity approaches will likely fall short when protecting AI systems that make lending decisions, detect fraud, or execute trades. And they're not getting much help from regulations yet, as they need to prove their AI systems aren't discriminating against protected classes while simultaneously ensuring those same systems can detect sophisticated financial crimes. There will be a continuing need for professionals who understand both machine learning vulnerabilities and financial regulations, and who can audit algorithmic trading systems for both security flaws and market manipulation risks.
Healthcare
Healthcare organizations are caught between the promise of AI-driven diagnostics and the high risk of patient data breaches or patient harm caused by adversarial attacks. When an AI system recommends cancer treatment or interprets medical imaging, the stakes couldn't be higher, yet these same systems are vulnerable to attacks that could cause a misdiagnosis by altering just a few pixels in a medical scan. Professionals in this space must understand both HIPAA privacy requirements and adversarial machine learning attacks. Hospitals are hiring specialists who can ensure their AI clinical decision support systems won't be fooled by malicious inputs while maintaining the strict audit trails required for medical liability. The challenge is particularly acute because healthcare AI systems process the most sensitive personal data while operating in life-or-death scenarios, where security failures aren't just compliance violations; they are potential patient safety issues.
Government and Defense
The national security implications of AI have government agencies looking to build expertise in areas that barely existed five years ago. Defense contractors are hiring specialists to ensure AI weapons systems can't be tricked into targeting friendly forces, while civilian agencies need experts who can protect critical infrastructure AI systems from foreign adversaries. The challenge here has massive consequences for nations and citizens. When AI systems control power grids, transportation networks, or military assets, adversarial attacks become acts of warfare rather than mere cybercrime. Security clearance requirements create additional barriers, but they also create opportunities for professionals who can navigate both the technical complexities of AI security and the bureaucratic realities of government work.
Technology Companies
Tech companies are facing a perfect storm of AI security challenges while simultaneously trying to scale AI systems to serve customers. Large language model providers are discovering that securing conversational AI systems requires entirely new approaches, and that traditional penetration testing isn't enough. AI-native companies are building security teams from scratch, often poaching talent from traditional cybersecurity firms and offering compensation packages that reflect the scarcity of relevant expertise. Yet the same companies developing AI tools to automate cybersecurity are struggling to secure their own AI systems against increasingly sophisticated attacks. These organizations often offer the highest salaries and the most cutting-edge projects, but they also demand expertise in areas so new that there aren't established career paths or training programs, forcing professionals to essentially invent their roles as they go.
The reality is that most AI security roles don't require you to become an expert overnight, but they do demand a hybrid skill set that traditional cybersecurity training doesn't cover. If you're already in cybersecurity, start by getting comfortable with Python (if you're not already), and not just scripting: understand how it's used in machine learning workflows with frameworks like TensorFlow and PyTorch. You don't need to build neural networks from scratch, but you absolutely need to understand how they work well enough to spot when something's wrong. For SOC analysts looking to transition into AI-powered security operations, focus on natural language processing basics and get hands-on experience with behavioral analytics tools. The MITRE ATLAS framework is becoming as essential for AI security as MITRE ATT&CK is for traditional cybersecurity, so make it a priority learning target. Most importantly, just as in traditional cybersecurity, start thinking like an attacker. Understand how adversarial examples work, what data poisoning looks like, and why traditional security tools miss these types of threats.
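If you want to see what data poisoning means in practice, the small experiment below (a toy scikit-learn classifier on synthetic data, purely illustrative) flips a slice of training labels and measures the hit to test accuracy. Real poisoning attacks are subtler, but the mechanic is the same: corrupt the training data and the model quietly degrades.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return clf.score(X_test, y_test)

# "Poison" 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("clean accuracy:   ", round(train_and_score(y_train), 3))
print("poisoned accuracy:", round(train_and_score(poisoned), 3))
```

Nothing in the pipeline throws an error; the only symptom is a model that performs worse, which is why defenders have to validate data provenance and monitor model accuracy over time.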
For those drawn to the governance side, look at the AIGP certification. Don't stop there, though: you'll need to understand both the technical implications of AI systems and a regulatory landscape that's evolving faster than most compliance frameworks can keep up with. I know, shocking. Critical thinking skills aren't just nice-to-have in this space; they're essential. You may find yourself as the first person to encounter an ethical dilemma that has no established precedent. If research appeals to you, start contributing to open-source AI security projects, publish your findings (even if they're small experiments), and go speak about them at conferences.
Lastly, while many of these areas are so new that there aren't established career paths yet, early adopters have the opportunity to literally define what these roles look like. The key is combining traditional cybersecurity expertise with enough AI/ML knowledge to analyze model behavior and trace AI-specific attacks. Whether you're aiming for hands-on defense, strategic governance, or cutting-edge research, the common thread is that you need to become comfortable with ambiguity and being out in front. These roles are evolving so rapidly that what you'll be doing in two years likely doesn't exist as a job description today. But that's a good thing!
This article originally appeared on LinkedIn.