“Since the beginning of 2023 until the end of April, out of 13,296 new domains created related to ChatGPT or OpenAI, 1 out of every 25 new domains were either malicious or potentially malicious,” Check Point researchers shared on Tuesday.
On Wednesday, Meta said that, since March 2023, it has blocked more than 1,000 malicious links leveraging ChatGPT as a lure from being shared across its technologies (Facebook, WhatsApp, etc.).
“To target businesses, malicious groups often first go after the personal accounts of people who manage or are connected to business pages and advertising accounts,” explained Nathaniel Gleicher, Head of Security Policy, and Ryan Victory, Malware Discovery and Detection Engineer at Meta.
To help users of its own platforms fight these threats, Meta is:
- Launching a new support tool that guides people step-by-step through how to identify and remove malware
- Rolling out an ability for businesses to have more visibility and control over administrator changes in Business Manager
- Expanding authorization requirements for sensitive business account actions
Other lures and targets
“Threat actors may design their malware to target a particular online platform, including building in more sophisticated forms of account compromise than what you’d typically expect from run-of-the-mill malware. For example, we’ve seen malware families that can attempt to evade two-factor authentication or have the ability to automatically scan for and detect connections between the compromised account and business accounts it might be linked to.”
The malware they use – such as DuckTail, NodeStealer, and other infostealers – is after almost any login credentials or session cookies it can grab, which the attackers then leverage to hijack accounts on various social media and online services and use them to spread and host malware.
[Image] The malware steals data from browsers (Source: Meta)
“We’ve seen blocking and public reporting of these malicious strains force their operators to rapidly evolve tactics to try and stay afloat,” noted Meta’s engineers.
“We’ve seen them use cloaking in an attempt to circumvent automated ad review systems, and leverage popular marketing tools like link-shorteners to disguise the ultimate destination of these links. Many of them also changed their lures to other popular themes like Google’s Bard and TikTok marketing support. Some of these campaigns, after we blocked malicious links to file-sharing and site hosting platforms, began targeting smaller services, such as Buy Me a Coffee – a service used by creators to accept support from their audiences – to host and deliver malware.”