AI is becoming part of everyday criminal workflows
Underground forums include long threads about chatbots drafting phishing emails, generating code snippets, and coaching social engineering calls. A new study examined conversations captured between January 1, 2025, and July 31, 2025, across dozens of cybercrime forums to map how AI tools are entering day-to-day criminal operations.

The dataset includes 163 discussion threads drawn from 21 forums, totaling 2,264 messages posted by 1,661 distinct contributors. Much of the activity clustered on well-known platforms such as XSS, BreachForums, Dread, and Exploit.in.
Four themes dominated the discussions: repurposing mainstream AI services, marketing criminal AI products, adapting models for specific operations, and debating operational risk.
Mainstream tools drive experimentation
Commercial chatbots serve as the starting point for many participants. ChatGPT appeared in 52.5 percent of the threads that mentioned legal AI products. DeepSeek followed at 27.9 percent, Claude at 19.7 percent, and Grok at 18.0 percent. Llama, Gemini, Mistral, Hugging Face, Manus AI, and WhiteRabbitNeo also appeared across multiple conversations.
Open-source and locally hosted models drew attention for their privacy and fewer built-in content restrictions. Participants described running models offline to draft scripts, refine phishing language, and explore attack concepts. Discussions also covered the hardware and time required to train or fine-tune a model for offensive tasks, with several contributors citing long development cycles even when using high-end consumer GPUs.
Jailbreaking remained common. Users shared prompts designed to bypass safety controls, including role-play scenarios and instructions that attempt to override internal policies. Some threads focused on which models appeared more permissive during testing. Others described ways to obtain premium access through stolen or resold accounts. Listings included accounts with active subscriptions and instructions for abusing student verification flows.
Criminal AI brands crowd the forums
A second stream of activity involved tools marketed specifically for fraud, spam, and malware. Fifty threads centered on selling, requesting, or reviewing these products. Mentions clustered around a handful of names. WormGPT accounted for 26 percent of product mentions, FraudGPT for 18 percent, and DarkGPT for 16 percent. ChaosGPT, GhostGPT, and SpamGPT each appeared in 6 percent of mentions.
Many offerings functioned as wrappers that resell access to mainstream models through a bot interface or API gateway paired with a jailbreak prompt. Threads described short-lived services, disputes over quality, and concerns about logging or hidden collection features. Sellers also advertised custom development. Some offered to host large language models for clients lacking infrastructure. Others promoted AI-enabled calling systems designed to automate outbound fraud operations and handle victim interactions.
Monitoring how these products are marketed could offer early warning of broader adoption. Benoît Dupont, PhD, professor of criminology and co-author of the study, told Help Net Security that defenders can track how often AI claims appear in underground sales listings.
“We could monitor forums, markets and Telegram channels to assess what share of malicious products and services on sale claim to be powered or enabled by AI,” Dupont said. “This claim is often central to secure a competitive advantage, so sellers are unlikely to obfuscate this in their offerings. If we were able to reliably measure the share of AI powered cybercriminal products and services advertised, we could track when certain thresholds are being reached and could certainly state with more confidence that we are leaving the experimental phase for a more industrial phase. Of course, advertising does not mean adoption, but this is an indicator of the direction things are going.”
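The metric Dupont describes, the share of listings that claim AI capability, could be sketched as a simple keyword-share calculation over listing text. The keyword pattern and sample titles below are illustrative assumptions, not data or methods from the study; a real tracker would need a curated, continually updated term list.

```python
import re

# Illustrative pattern; assumed terms, not a vetted taxonomy.
AI_CLAIM = re.compile(r"\bai\b|gpt|\bllm\b|language model", re.IGNORECASE)

def ai_claim_share(listings):
    """Return the fraction of listing texts that claim AI capability."""
    if not listings:
        return 0.0
    flagged = sum(1 for text in listings if AI_CLAIM.search(text))
    return flagged / len(listings)

# Hypothetical marketplace listing titles for demonstration only.
sample = [
    "WormGPT access - monthly subscription",
    "Bulletproof hosting, EU servers",
    "AI-powered spam generator, evades filters",
    "Fresh fullz, verified",
]
print(f"AI-claim share: {ai_claim_share(sample):.0%}")  # 2 of 4 -> 50%
```

Computed over time, a rising share of this kind could mark the thresholds Dupont mentions, keeping in mind his caveat that advertising does not equal adoption.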
Adaptation centers on scams and automation
Higher-skill discussions focused on adapting AI to specific workflows. Participants described using chatbots to rehearse social engineering scripts tailored to a target organization. Others outlined tools that generate variable spam content to evade filters by altering phrasing and structure. Call center automation featured prominently. Posts detailed virtual assistants that support human operators in real time by suggesting responses, extracting one-time passwords, and forwarding victims to live agents.
A smaller set of threads addressed malware development. Contributors emphasized the need for technical expertise to turn generated snippets into functioning payloads and delivery chains. Several elite forums added dedicated AI sections to concentrate discussion and attract specialists. Recruitment posts offered hourly assistance with model setup and integration into existing toolchains.
Dupont expects fraud operations to integrate AI faster than other categories of cybercrime.
“Social engineering and scamming operations will probably be able to leverage AI capacities more systematically, profitably and sooner than malware writing operations in the near future at least,” he said. “This very uncertain assessment is based on the fact that profit incentives and rewards are more accessible for AI enabled scams, but also because defensive AI systems are more systematically deployed to protect organizations, whereas individuals seem more exposed and benefit from limited AI enabled protection.”
Skepticism and operational risk
Skepticism ran through many conversations. Participants questioned the reliability of AI-generated code for complex offensive tasks and cited frequent errors and hallucinated functions. Complaints about low-quality forum posts generated by chatbots appeared across multiple communities, with members describing an increase in repetitive and derivative content.
Operational security concerns also surfaced repeatedly. Contributors treated prompts and chat histories as sensitive data that platform operators can monitor and store. Advice circulated about minimizing identifying details in queries and rotating accounts. Similar caution applied to criminal AI services, where buyers expressed concern about logging, hidden backdoors, and potential interception of stolen data.
Dupont said defenders can watch for measurable signals that point to scaled automation.
“The defensive signals that could be monitored could include the volume in certain types of scam reported by victims, providing we can access these reports in real time, volume of phishing, vishing and smishing messages intercepted, level of sophistication of these messages and calls, level of coordination of certain calling and messaging campaigns, volume of new account generation among certain digital platforms used to enable online scams, to name a few,” he said. “Any fraud signal that scales up and demonstrates high levels of coordination should be examined carefully to determine whether AI tools are at play.”
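The scale-up detection Dupont describes could be approximated with a simple baseline comparison: flag any fraud signal whose current volume sits far above its recent history. This is a minimal sketch assuming daily report counts as input; the z-score threshold and sample figures are illustrative, and production pipelines would also need to handle trend and seasonality.

```python
from statistics import mean, stdev

def scaled_up(history, current, z_threshold=3.0):
    """Flag when the current count sits far above the recent baseline.

    history: recent daily counts of one fraud signal (e.g. phishing reports)
    current: today's count
    Uses a plain z-score against the historical mean and spread.
    """
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > z_threshold

# Hypothetical daily phishing-report counts.
baseline = [120, 135, 110, 128, 140, 125, 132]
print(scaled_up(baseline, 131))  # False: within normal variation
print(scaled_up(baseline, 420))  # True: sudden coordinated spike
```

As the quote notes, a flag like this only indicates that something is scaling; determining whether AI tools are behind the spike still requires examining the intercepted messages themselves.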
Across the seven-month window, adoption clustered in fraud, scams, and social engineering workflows. A core group of innovators experimented with automation and new service models, and a wider set of users tested mainstream tools for drafting messages and refining scripts. The broader ecosystem shows an early stage of integration, with experimentation, marketing activity, and debate unfolding across multiple forums.