AI weaponization becomes a hot topic on underground forums

The majority of cyberattacks against organizations are perpetrated via social engineering of employees, and criminals are using new methods including AI to supercharge their techniques, according to ReliaQuest.

AI automation threat

Some 71% of all attacks use phishing to trick employees, and of particular concern is a sharp rise in QR code phishing, which increased 51% last year compared with the previous eight months. Employees are also being duped into downloading fake updates, often for their web browsers.
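Fake-update lures typically rely on lookalike domains that closely resemble legitimate vendors. A minimal sketch of one defensive heuristic, using only the standard library: compare a download domain against a small brand list with fuzzy string matching. The brand list and similarity threshold here are illustrative assumptions, not a production allow-list.

```python
from difflib import SequenceMatcher

# Illustrative brand list (an assumption for this sketch, not exhaustive).
KNOWN_BRANDS = ["mozilla.org", "google.com", "microsoft.com"]

def looks_like_fake_update(domain, threshold=0.8):
    """Flag domains that are very similar to, but not the same as, a known brand."""
    for brand in KNOWN_BRANDS:
        ratio = SequenceMatcher(None, domain, brand).ratio()
        if domain != brand and ratio >= threshold:
            return True
    return False

print(looks_like_fake_update("mozi11a.org"))   # lookalike domain, flagged
print(looks_like_fake_update("mozilla.org"))   # exact legitimate match, not flagged
```

Real products layer many more signals (certificate age, registration date, hosting reputation) on top of simple lexical similarity.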

Drive-by compromise has traditionally been defined as the automatic download of a malicious file from a compromised website without user interaction. In most cases reviewed during the reporting period, however, user action was involved, and the technique facilitated initial access in nearly 30% of incidents.

Threat actors automate attacks with AI

The use of AI to accelerate these attacks is drawing significant attention on major cybercriminal forums, where interest in weaponizing the technology is growing.

Researchers have found dedicated AI and machine learning sections on these sites, which detail criminal alternatives to mainstream chatbots, such as FraudGPT and WormGPT, and hint at these tools being used to develop simple malware and distributed denial-of-service (DDoS) queries.

AI systems can now replicate a voice from a short sample, and video-call deepfakes are aiding threat actors. Researchers have also noted a growing number of threat actors automating various stages of their attacks, or even the entire attack chain – particularly in exploitation of the Citrix Bleed vulnerability.

However, while AI-powered automation is being leveraged by attackers, it has also delivered a step change in defensive capabilities among organizations.

Criminals prioritize financial theft in 2023

Financial theft stood out as the primary objective of criminals in 2023, driving 88% of customer incidents. Extortion activity increased by 74%, with a record 4,819 compromised entities named on ransomware groups' data-leak websites; LockBit alone accounted for more than 1,000 of them.

ReliaQuest noted a significant threat from suspected nation-state actors using so-called 'living off the land' (LotL) techniques. In such incidents, threat actors seek to hide their activity via defense-evasion techniques such as clearing logs and abusing PowerShell. In an intrusion researchers observed in April 2023, a Chinese state-sponsored threat group focused primarily on using LotL commands to blend into a company's environment. The group's discreet LotL activity allowed access for more than a month.
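Because LotL activity abuses legitimate built-in tools, defenders often hunt for suspicious command lines rather than malicious binaries. A minimal sketch of that idea, assuming a short, illustrative set of indicator patterns for the behaviors described above (log clearing and encoded PowerShell); real detection content is far richer.

```python
import re

# Illustrative-only indicators for LotL behaviors; these patterns are
# assumptions for this sketch, not a complete detection rule set.
LOTL_PATTERNS = [
    (re.compile(r"wevtutil\s+cl", re.IGNORECASE), "event log clearing"),
    (re.compile(r"Clear-EventLog", re.IGNORECASE), "event log clearing"),
    (re.compile(r"powershell(\.exe)?\s+.*-enc", re.IGNORECASE), "encoded PowerShell"),
]

def flag_lotl(command_lines):
    """Return (command, reason) pairs for command lines matching an indicator."""
    hits = []
    for line in command_lines:
        for pattern, reason in LOTL_PATTERNS:
            if pattern.search(line):
                hits.append((line, reason))
                break  # one reason per command line is enough here
    return hits

sample = [
    "powershell.exe -NoProfile -EncodedCommand SQBFAFgA...",
    "wevtutil cl Security",
    "notepad.exe report.txt",
]
for cmd, why in flag_lotl(sample):
    print(f"{why}: {cmd}")
```

Pattern matching alone produces noise; in practice such hits are correlated with context (parent process, user, timing) before anyone is paged.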

“As the threat continues to evolve, defenders must stay agile, using AI and automation to keep pace with the latest attack techniques. Time is the enemy in cybersecurity. To proactively protect against these risks, companies should maximize visibility across their networks and beyond the endpoint, fully leverage AI and automation to better understand and use their own data, and equip their teams with the latest threat intelligence. With this approach, in the next year we expect customers who fully leverage our AI and automation capabilities to contain threats within 5 minutes or less,” said Michael McPherson, ReliaQuest’s SVP of Technical Operations.

Cybersecurity in 2024 will be heavily influenced by GenAI, the creation of malicious AI models, and widespread automation in cyberattacks, all of which enhance threat actors' capabilities. Automated dynamic playbooks will grant even unskilled attackers sophisticated ways to expedite operations, shortening the time from breach to impact.
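The same playbook idea cuts both ways: defenders chain response steps the way attackers chain operations. A minimal sketch of a step-driven playbook runner, where the step names and containment actions (`isolate_host`, `revoke_sessions`) are hypothetical stand-ins, not real product APIs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]

# Hypothetical containment actions; a real platform would call out to
# EDR and identity-provider APIs instead of mutating a dict.
def isolate_host(ctx):
    ctx["isolated"] = ctx["host"]
    return ctx

def revoke_sessions(ctx):
    ctx["sessions_revoked"] = True
    return ctx

def run_playbook(steps, ctx):
    """Execute steps in order, threading the shared context through each one."""
    for step in steps:
        ctx = step.action(ctx)
        print(f"completed: {step.name}")
    return ctx

result = run_playbook(
    [Step("isolate host", isolate_host), Step("revoke sessions", revoke_sessions)],
    {"host": "ws-042"},
)
```

The value of the pattern is that each step is small, ordered, and auditable, which is what makes minute-scale containment plausible at all.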
