Bracing for AI-enabled ransomware and cyber extortion attacks

AI has been the shiniest thing in tech since at least November 2022, when ChatGPT was made available to the masses and unveiled the transformative potential of large language models for all the world to see.

AI-enabled attacks

As businesses scramble to take the lead in operationalizing AI-enabled interfaces, ransomware actors will use the same technology to scale their operations, widen their profit margins, and increase their odds of pulling off successful attacks. As a result, an already sophisticated business model of encryption-less extortion will benefit further from AI advancements, exacerbating the threat to both public and private organizations.

We are facing a future where the same technologies we’ve recently come to rely on to route help desk inquiries or reserve a table at a restaurant may be used by ransomware groups to improve their social engineering tactics and technical skills.

In a dark parody of legitimate organizations, in the coming years ransomware groups may use chatbots and other AI-enabled tools to:

  • Clone voices for voice-based phishing (a.k.a. vishing) attacks that impersonate employees to gain privileged access
  • Tailor email-based phishing attacks with native language accuracy in multiple languages
  • Discover and identify zero-day vulnerabilities that can be leveraged for initial access
  • Reduce the time required to develop malicious code and lower the barrier to entry

When AI-enabled capabilities are coupled with potent malware, we should expect cybercriminals to double down on ransomware as a means of generating revenue rather than abandoning it in favor of something new.

An unneeded leg-up

Findings from Zscaler’s ThreatLabz threat intelligence team suggest ransomware actors are doing just fine without the added firepower. Researchers have charted a 37% rise in ransomware incidents observed in the Zscaler cloud in 2023 (250% over the past two years), a triple-digit increase in double-extortion tactics across numerous industries, and an overall surge in sector-specific attacks targeting industries like manufacturing. Public sector organizations are also emerging as favored targets.

In addition to state-sponsored attacks by APTs, governments must deal with their fair share of criminal activity as well, particularly at lower levels of government where cybersecurity resources are especially scarce. This includes attacks against police departments, public schools, healthcare systems, and others. These attacks ramped up in 2023, a trend we expect to continue as cybercriminals seek out easy targets from which to steal sensitive data like PII.

Ransomware groups’ success is often less about technological sophistication and more about their ability to exploit the human element in cyber defenses. Unfortunately, this is exactly the area where we can expect AI to be of the greatest use to criminal gangs. Chatbots will continue to remove language barriers to crafting convincing social engineering attacks, learn to communicate more naturally, and even lie to get what they want. And as developers release ethically dubious and amoral large language models in the name of free speech and other justifications, those models will also be used to craft novel threats.

These dangers have been highlighted repeatedly since ChatGPT captured our collective attention nearly a year ago, but how they will make ransomware actors’ lives easier bears special emphasis. Without integrating AI into our security solutions, already rampant ransomware activity could become even more disruptive.

Breaking the chain

Successful ransomware attacks tend to follow a depressingly similar attack pattern.

Threat actors probe target organizations for an exposed attack surface during the reconnaissance phase. Internet-exposed, IP-based technologies like VPNs and firewalls often make this trivially simple: search-engine-like tools for discovering internet-facing devices can surface them in seconds. In connected environments, IoT/OT devices designed without security in mind further ease the initial compromise.
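
Defenders can see just how trivial this reconnaissance is by running the same kind of query against their own footprint. Below is a minimal sketch using the public Shodan Python client; the API key, organization name, and search filter are placeholders, a real exposure audit would cover your own asset inventory, and you should only query assets you are authorized to assess.

```python
# Minimal exposure-audit sketch using the public Shodan client (pip install shodan).
# The API key, organization name, and filter are placeholders -- adapt them to your
# own environment.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"        # placeholder
QUERY = 'org:"Example Corp" port:443'  # hypothetical organization and filter

def audit_exposure(api_key: str, query: str) -> None:
    """Print internet-facing services matching the query, as an attacker would see them."""
    api = shodan.Shodan(api_key)
    try:
        results = api.search(query)
    except shodan.APIError as err:
        print(f"Shodan query failed: {err}")
        return

    print(f"{results.get('total', 0)} exposed services found")
    for match in results.get("matches", []):
        ip = match.get("ip_str", "?")
        port = match.get("port", "?")
        product = match.get("product", "unknown service")
        print(f"  {ip}:{port} -> {product}")

if __name__ == "__main__":
    audit_exposure(API_KEY, QUERY)
```

Anything that shows up in a scan like this is a candidate for removal from the public internet or placement behind a zero trust broker.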

As discussed, ransomware actors may increasingly rely on AI-enabled technologies to discover vulnerabilities or to create spear phishing emails. After establishing a foothold on an organization’s network, ransomware groups move laterally in search of high-value data worth paying a ransom to recover. Finally, data is encrypted, exfiltrated, or both, giving attackers additional leverage over the victimized organization.

Luckily, there is a role for AI to play in thwarting this well-established process by adding capabilities at each step:

  • Minimize the attack surface – AI-assisted scans search the environment for exposed assets, providing a dynamic risk score for the organization and recommended remediation steps. This smart discovery process helps keep sensitive assets from being easily found by threat actors conducting reconnaissance.
  • Prevent compromise – Risk-based policy engines informed by AI analysis can help organizations fine-tune enforcement to match their risk appetite. AI also assists with inline inspection of encrypted traffic (where most ransomware hides) and limits the damage of any malicious activity through capabilities like smart cloud browser isolation and sandboxing.
  • Eliminate lateral movement – AI-powered policy recommendations, trained on millions of signals from private app telemetry, user context, behavior, and location, will simplify the process of user-to-app segmentation.
  • Stop data loss – AI-assisted data classification will help organizations tag sensitive data and enforce strict controls against uploading it to cloud storage. This should work across many file formats, with coverage eventually extending to video and audio (a minimal sketch of the idea follows this list).
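
To make the last item concrete, here is a minimal, hypothetical sketch of how automated classification might gate uploads to cloud storage. A couple of regex patterns stand in for a trained classifier, and enforce_upload_policy is an illustrative hook rather than any vendor’s actual implementation.

```python
# Hypothetical sketch of data classification feeding an upload control.
# Regex patterns stand in for a trained ML classifier; a production DLP
# engine would also handle many file formats, exact-match data, and OCR.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data tags detected in the text."""
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}

def enforce_upload_policy(filename: str, text: str) -> str:
    """Block uploads to cloud storage when sensitive tags are found."""
    tags = classify(text)
    if tags:
        return f"BLOCK {filename}: contains {', '.join(sorted(tags))}"
    return f"ALLOW {filename}"

if __name__ == "__main__":
    print(enforce_upload_policy("notes.txt", "Lunch schedule for next week"))
    print(enforce_upload_policy("hr_export.csv", "Jane Doe, SSN 123-45-6789"))
```

The value of AI here is replacing the brittle pattern list with models that recognize sensitive content by context, not just by format.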

These are only a few examples of how AI will help disrupt the cyber-attack chain. It will play other roles as well, such as automating root cause analysis to fortify organizations against future attacks. Protection should also include an education component that raises awareness of the potential for malicious chatbots in help desk and other frontline service functions.

If we are ultimately headed for a future where cybercriminals use AI to deploy ransomware more effectively, it’s essential that security teams innovate just as quickly to bolster their defenses. As with our adversaries, AI will be key to doing so.
