Based on Anthropic’s August 2025 Threat Intelligence Report

Cybersecurity is experiencing a seismic shift.

Advanced Persistent Threat (APT) groups, those sophisticated nation-state and criminal actors we’ve grown accustomed to tracking through traditional methods, are now wielding artificial intelligence as their newest weapon.

Anthropic’s latest threat intelligence report reveals some eye-opening cases that show how AI isn’t just changing cybercrime; it’s revolutionizing it.


When AI becomes the hacker

Let’s start with perhaps the most striking example from Anthropic’s research: “vibe hacking.” This isn’t your typical cybercriminal with a laptop in a dark room. Instead, we’re seeing operators who use Claude Code (Anthropic’s AI coding assistant) to literally perform cyberattacks for them. One tracked group, designated GTG-2002, managed to compromise 17 organizations across government, healthcare, and other critical sectors in just one month.

The fascinating (and terrifying!) part? These operators appear completely dependent on AI assistance. They can’t write basic code, debug problems, or even craft professional communications without their AI assistant. Yet they’re successfully breaching Fortune 500 companies and demanding ransoms exceeding $500,000.

Think about that for a moment. We’ve moved from needing years of technical training to become a capable threat actor, to simply needing access to an AI model and some creativity.

It’s the democratization of cybercrime, and it’s happening right now.

North Korea’s AI-powered workforce

The report also sheds light on how North Korean IT workers are using AI to maintain fraudulent employment at Western tech companies. Traditionally, these operations required highly skilled individuals trained from a young age in specialized North Korean institutions. Now, operators who can’t independently perform basic technical tasks are successfully maintaining engineering positions at major companies.

The data is striking: 61% of their AI usage focuses on frontend development, 26% on programming tasks, and 10% on interview preparation. These workers are essentially outsourcing their technical competence to AI models, using them as real-time technical advisors during work hours.

According to FBI assessments, these operations generate hundreds of millions annually for North Korea’s weapons programs, a concerning escalation enabled by AI accessibility.

The Ransomware-as-a-Service evolution

Perhaps most alarming is how AI is transforming the ransomware landscape. Anthropic tracked a UK-based actor (GTG-5004) who developed and marketed sophisticated ransomware with AI assistance, despite appearing unable to implement complex technical components independently. This operator successfully created ransomware packages priced from $400 to $1,200, featuring:

  • Advanced encryption using the ChaCha20 stream cipher (illustrated below)
  • Anti-EDR (Endpoint Detection and Response) evasion techniques
  • Direct syscall invocation capabilities
  • Professional command-and-control (C2) infrastructure

The actor marketed these tools on dark web forums, claiming they were “for educational purposes only” while simultaneously advertising on criminal marketplaces. It’s a stark reminder that AI doesn’t distinguish between legitimate and malicious use cases: it simply executes what it’s asked to do.
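
For perspective, ChaCha20 itself is a standard, well-documented cipher; what the AI supplied was the integration work around it. Here’s a minimal sketch of authenticated ChaCha20 encryption using Python’s `cryptography` package, purely to illustrate the primitive the report names (this is not code from the actor’s toolkit):

```python
import os

# pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# A fresh 256-bit key and a unique 96-bit nonce per message.
key = ChaCha20Poly1305.generate_key()
nonce = os.urandom(12)

cipher = ChaCha20Poly1305(key)

# Encrypt and authenticate; reusing a (key, nonce) pair breaks the scheme.
ciphertext = cipher.encrypt(nonce, b"example plaintext", None)

# Decryption raises InvalidTag if the ciphertext was tampered with.
assert cipher.decrypt(nonce, ciphertext, None) == b"example plaintext"
```

The point isn’t the dozen lines above. It’s that everything hard around them (key management, evasion, deployment, infrastructure) used to require genuine expertise, and now an AI assistant fills that gap.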

The MCP factor: supercharging data analysis

One particularly sophisticated development involves the use of Model Context Protocol (MCP) for analyzing stolen data. Threat actors are now using AI not just to steal information, but to intelligently analyze and profile victims from stolen browser logs and personal data.

Instead of manually sifting through gigabytes of stolen information, criminals can now use AI to automatically categorize websites visited, analyze behavioral patterns, and create detailed victim profiles.

This transforms raw stolen data into actionable intelligence for targeted attacks, fraud, or blackmail operations.
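
For readers unfamiliar with it, MCP is Anthropic’s open protocol for connecting a model to external tools and data sources. The sketch below, built on the official `mcp` Python SDK, exposes a single hypothetical log-triage tool (the server name and function are made up for illustration, and framed for a defender analyzing their own logs). It shows how little glue code is needed to put a model in the loop over a dataset, which is exactly the leverage the report describes:

```python
# pip install mcp
from collections import Counter

from mcp.server.fastmcp import FastMCP

# Hypothetical demo server; the name is illustrative only.
mcp = FastMCP("log-triage")

@mcp.tool()
def count_event_types(log_lines: list[str]) -> dict[str, int]:
    """Tally the first token of each log line (e.g., an event type)."""
    return dict(Counter(line.split()[0] for line in log_lines if line.strip()))

if __name__ == "__main__":
    # Serve the tool over stdio so an MCP-capable client can call it.
    mcp.run()
```

Once a model can call tools like this on demand, “sift through gigabytes” stops being a human bottleneck, for attackers and defenders alike.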

A global phenomenon

The report documents cases spanning multiple countries and threat actor groups:

  • Chinese APT groups systematically using Claude across 12 of 14 MITRE ATT&CK tactics
  • Russian-speaking developers creating advanced malware with AI assistance
  • Romance scam operations powered by AI chatbots targeting victims globally
  • Synthetic identity services using AI for large-scale fraud operations

The technical implications

What makes these developments particularly concerning is the technical sophistication being achieved without traditional expertise.

Threat actors are now implementing:

  • Advanced evasion techniques like Hell’s Gate syscall resolution and Early Bird process injection
  • Multi-API resilience frameworks that automatically rotate between services when detection occurs
  • Behavioral analysis systems that create detailed victim profiles from stolen data
  • Automated social engineering that crafts emotionally intelligent messages for romance scams

The defense challenge

Traditional cybersecurity assumes that sophisticated attacks require sophisticated actors. When a defender sees advanced techniques, they might assume they’re dealing with a well-resourced, highly skilled adversary. But AI has broken that assumption. Now, relatively unskilled actors can deploy nation-state-level techniques through AI assistance.

This creates new challenges for threat attribution, incident response, and defensive strategy. How do you defend against an adversary whose capabilities can scale instantly through AI assistance?

The implications are far-reaching. Organizations need to rethink their threat models to account for AI-enhanced adversaries who can:

  • Scale operations far beyond traditional limitations
  • Adapt attack techniques in real-time
  • Generate convincing social engineering content
  • Develop custom malware without traditional programming skills
  • Maintain persistent access through AI-assisted operations

The cases documented in Anthropic’s report represent a fundamental shift in the threat landscape. We’re no longer just dealing with traditional cybercriminals who might use AI as a tool: we’re seeing the emergence of AI-dependent threat actors whose entire operational model relies on artificial intelligence.

This evolution demands new approaches to cybersecurity, threat intelligence, and incident response. Organizations must prepare for adversaries whose capabilities can scale instantly and whose attack patterns might not follow traditional indicators of compromise.

The future of cybersecurity isn’t just about defending against human adversaries: it’s about defending against human creativity amplified by artificial intelligence. And based on Anthropic’s findings, that future is already here.