For years, the cybersecurity community has debated how quickly threat actors would adopt AI as an offensive tool. According to new research from Microsoft Threat Intelligence, that question has been answered—and the operationalization is more systematic than many expected.
Published on March 6th, the Microsoft Security Blog report, "AI as Tradecraft: How Threat Actors Operationalize AI," documents a fundamental shift in how adversaries work: AI is no longer an experimental add-on but a fully embedded operational layer, woven into attack chains from the first keystroke of reconnaissance to the final steps of data exfiltration and extortion.
The report's central finding is that AI functions as a force multiplier—reducing technical friction, compressing timelines, and enabling scale that wasn't previously possible—while human operators retain control over targeting and objectives.
No threat actor illustrates this more comprehensively than North Korea (DPRK).
Microsoft tracks two North Korean clusters—Jasper Sleet and Coral Sleet—whose operations offer the most detailed picture yet of what AI-enabled, revenue-driven cybercrime looks like at scale. Their goal is not a smash-and-grab intrusion; it is long-term, trusted employment inside Western organizations, sustained through identity fabrication sophisticated enough to fool HR departments, hiring managers, and colleagues.
The process starts well before any application is submitted. Jasper Sleet uses AI to scrape job postings from platforms like Upwork, then prompts language models to extract required skills, certifications, and role-specific language. That output is then used to tailor a fabricated identity to the exact profile a hiring manager wants to see. AI generates culturally appropriate name lists and matching email address formats on demand. Resumes and cover letters are AI-drafted and customized per application.
The identity fraud extends to visual media. Jasper Sleet has been observed using the AI application Faceswap to insert North Korean workers' faces into stolen identity documents and generate polished headshots for resumes—in some cases, reusing the same AI-generated photo across multiple personas with slight variations. During remote job interviews, voice-changing software masks accents, allowing operators to present as Western candidates.
Once hired, AI keeps the operation running. Operators use generative AI to translate workplace communications, craft contextually appropriate responses to colleagues, and generate code snippets when faced with unfamiliar technical domains—all to sustain the performance expectations of a legitimate employee. Microsoft notes that this mirrors how many real employees now use AI tools daily, making such behavior harder to flag as anomalous.
[RELATED: North Korean IT Workers Expand Global Reach and Tactics]
Meanwhile, Coral Sleet has used AI coding tools to generate, refine, and reimplement malware components at a pace that suggests rapid iterative development—including, Microsoft notes, instances of jailbreaking language models to produce malicious code that bypasses built-in safety controls. The same actor has built a convincing, high-trust web infrastructure at scale using AI-assisted development platforms, enabling fast staging, payload testing, and command and control (C2) operations that are significantly harder to detect and easier to refresh.
The North Korean activity is the report's most detailed case study, but Microsoft frames it within a broader taxonomy of how threat actors across the landscape are incorporating AI. The pattern holds regardless of actor: AI accelerates reconnaissance, scales social engineering, assists malware development, and streamlines post-compromise operations, including data triage, exfiltration planning, and monetization.
Vincenzo Iozzo, CEO of identity threat detection provider SlashID, says the adoption of adversarial AI is compressing the window defenders have to respond. "Breakout times are steadily decreasing, in large part because of AI-assisted offensive operations," he told SecureWorld. "When adversaries can move from initial access to lateral movement in minutes rather than hours, defenders need more comprehensive telemetry across their environments to detect breaches before they escalate."
Iozzo also pointed to documented cases of AI being embedded directly into malware logic—not just used to write it. The LameHug malware, tied to the Russian threat actor APT28 and reported by Ukraine's CERT-UA, communicates with a cloud-hosted instance of the Qwen large language model to receive dynamic C2 instructions, enabling real-time decision-making during lateral movement.
Microsoft is careful to characterize most observed threat actors' use of AI as generative—producing text, code, and synthetic media, with humans directing the work. But the report flags early signals of a shift toward agentic AI: systems that autonomously pursue multi-step objectives, invoke tools, evaluate outcomes, and adapt without continuous human prompting.
Large-scale agentic use has not yet been observed, Microsoft notes, due to reliability and operational risks. But proof-of-concept frameworks are already demonstrating the potential, and Ram Varadarajan, CEO of cyber deception firm Acalvio, argues the strategic implications are significant. "Legacy defenses are built for human attackers, and are now unable to fight back in either speed or scale against the agentic attacker," Varadarajan told SecureWorld. "Our cybersecurity future is bot-on-bot."
Microsoft's guidance focuses on three priorities. First, organizations should treat North Korean IT worker activity as an insider risk problem—focusing detection on misuse of legitimate credentials, abnormal access patterns, and sustained low-and-slow activity rather than traditional intrusion indicators. Second, phishing defenses should shift toward behavioral signals and analysis of delivery infrastructure rather than relying on linguistic patterns, since AI eliminates the grammatical errors and cultural tells that previously flagged malicious messages. Third, organizations deploying AI internally should actively govern how those tools are used: auditing permissions, tracking the data fed into AI systems, and monitoring for prompt injection attempts.
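The first priority—catching abnormal access patterns rather than intrusion indicators—can be illustrated with a minimal baseline-and-deviation check. This is a hypothetical sketch, not anything from the Microsoft report: the function names (`build_baselines`, `flag_anomalies`) and the idea of keying on login hour are illustrative stand-ins for the far richer telemetry a real insider-risk program would correlate.

```python
from collections import defaultdict
from datetime import datetime

def build_baselines(history):
    """Map each user to the set of hours-of-day they normally log in."""
    baselines = defaultdict(set)
    for user, ts in history:
        baselines[user].add(datetime.fromisoformat(ts).hour)
    return baselines

def flag_anomalies(events, baselines):
    """Return events whose login hour falls outside the user's baseline.

    Unknown users have an empty baseline, so all their logins are flagged.
    """
    flagged = []
    for user, ts in events:
        hour = datetime.fromisoformat(ts).hour
        if hour not in baselines.get(user, set()):
            flagged.append((user, ts))
    return flagged

# Example: a remote worker who normally authenticates 09:00-16:00
# suddenly logs in at 03:00 -- a weak signal on its own, but the kind
# of low-and-slow deviation worth correlating with other telemetry.
history = [("jdoe", f"2025-03-0{d}T{h:02d}:00:00")
           for d in range(1, 6) for h in range(9, 17)]
events = [("jdoe", "2025-03-06T03:00:00"),
          ("jdoe", "2025-03-06T10:00:00")]
print(flag_anomalies(events, build_baselines(history)))
# → [('jdoe', '2025-03-06T03:00:00')]
```

In practice, single-feature rules like this generate noise; the point, consistent with Iozzo's comments below, is that each additional correlated signal (geography, device, data volume) narrows the gap between compromise and detection.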
Iozzo frames the defensive imperative plainly: visibility is the prerequisite for everything else. "The more data points an organization collects and correlates, the higher the probability of catching anomalous behavior in the shrinking window between compromise and impact," he said.
The full Microsoft Threat Intelligence report is available at the Microsoft Security Blog.
Follow SecureWorld News for more stories related to cybersecurity.