AI is becoming a core tool in cybercrime, Anthropic warns

A new report from Anthropic shows how criminals are using AI to actively run parts of their operations. The findings suggest that AI is now embedded across the full attack cycle, from reconnaissance and malware development to fraud and extortion.

AI-powered cybercrime

The report is based on real cases where Anthropic’s models were misused. It offers an unusually direct view into how attackers are adapting and building AI into every stage of their operations. While the focus is on Anthropic’s own models, the cases reflect a broader shift that applies across advanced AI systems.

AI as an active operator

One of the most striking findings is the way criminals are treating AI systems as operators inside their campaigns. In a case Anthropic calls “vibe hacking,” a single attacker used an AI coding agent to run a scaled data extortion campaign against at least 17 organizations in just one month. Targets included hospitals, government agencies, and emergency services.

Instead of using AI only for advice, the attacker gave it operational instructions, then relied on the model to make tactical and strategic choices. The AI scanned networks, harvested credentials, created malware designed to evade detection, and even analyzed stolen data to decide how much ransom to demand. The model also generated customized ransom notes tailored to each victim’s industry, size, and regulatory exposure.

This shows that AI can collapse the gap between knowledge and execution. What once required a group of skilled operators can now be carried out by a single person directing a model. It also raises questions for defenders about how to measure attacker sophistication when AI can deliver expertise instantly.

AI across the entire attack lifecycle

Criminals are building AI into every stage of their work. Anthropic documented attackers using AI for reconnaissance, privilege escalation, malware obfuscation, data theft, and ransom negotiations.

One Chinese group was seen leveraging AI across nearly all MITRE ATT&CK tactics during a months-long campaign against Vietnamese critical infrastructure. The model served as a code developer, security analyst, and operational consultant throughout. That meant the attackers could quickly generate new exploits, automate scanning and data analysis, and plan lateral movement strategies.
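For readers unfamiliar with the framework, MITRE ATT&CK groups attacker behavior into high-level tactics, each with an official ID. As a minimal illustrative sketch (the mapping below is an assumption for illustration, not taken from Anthropic’s report), the activities described in this article line up with tactics roughly as follows:

```python
# Illustrative mapping of activities mentioned in this article to
# MITRE ATT&CK enterprise tactic IDs (https://attack.mitre.org/tactics/enterprise/).
# Assumption: this mapping is for illustration only, not from Anthropic's report.
ATTACK_TACTICS = {
    "TA0043": "Reconnaissance",        # network and target scanning
    "TA0001": "Initial Access",        # using freshly generated exploits
    "TA0004": "Privilege Escalation",  # escalating access inside networks
    "TA0005": "Defense Evasion",       # malware obfuscation
    "TA0006": "Credential Access",     # harvesting credentials
    "TA0008": "Lateral Movement",      # planning lateral movement strategies
    "TA0010": "Exfiltration",          # data theft
    "TA0040": "Impact",                # extortion and ransom demands
}

def tactic_coverage(observed: set[str]) -> float:
    """Fraction of the tactics listed above observed in a campaign."""
    return len(observed & ATTACK_TACTICS.keys()) / len(ATTACK_TACTICS)

# Example: a hypothetical campaign spanning all but one listed tactic.
print(tactic_coverage({"TA0043", "TA0001", "TA0004", "TA0005",
                       "TA0006", "TA0008", "TA0010"}))  # 0.875
```

A "nearly all tactics" campaign, in these terms, is one whose coverage approaches 1.0 across the full enterprise matrix, rather than clustering in one or two phases as most intrusions do.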

The use of AI in so many phases creates two problems for defenders. First, attacks can move much faster, since AI removes manual bottlenecks. Second, AI-driven operations adapt quickly to defensive measures. Traditional assumptions that complex attacks require advanced operator skill are breaking down. A single actor with average skills can now orchestrate campaigns that look like the work of a well-funded team.

AI-driven fraud at scale

Beyond technical intrusions, the report highlights how AI is transforming fraud. Criminals are using models to analyze stolen data, build victim profiles, and run fraudulent services. Anthropic found cases where AI powered carding platforms, romance scams, and synthetic identity operations.

For example, one actor used AI to process massive amounts of stolen log data, turning it into behavioral profiles of victims. Another maintained a carding store that relied on AI to validate stolen credit cards at scale, with built-in resilience features usually seen in enterprise software. A separate case described a Telegram bot that used multiple models to craft emotionally convincing messages for romance scams, enabling non-native speakers to appear fluent and persuasive.

Together, these cases show that AI is lowering the technical barrier to entry into cybercrime and creating fraud ecosystems that are more scalable, adaptive, and profitable. The tools let criminals offer services that look professional and reliable to other actors, while masking how limited their own technical knowledge may be.
