Flipping the BEC funnel: Phishing in the age of GenAI

For years, phishing was just a numbers game: A malicious actor would slap together an extremely generic (and usually poorly written) email and fire it out to thousands of recipients in the hope that a few might take the bait. Over time, however, as spam filters and other email security tools became increasingly effective at filtering out such emails, threat actors adapted and began leveraging new techniques to circumvent these technologies.


Common among these new techniques was a shift towards a more balanced approach to phishing, one emphasizing both quantity and quality. This shift gave rise to the advanced phishing techniques we know all too well today, like spear-phishing and business email compromise (BEC). Unlike the phishing tactics of yesteryear, these techniques make use of much more carefully crafted, convincing messaging tailored to deceive specific individuals, groups, or organizations.

This shift in phishing philosophies has also led to a precipitous decline in the use of malicious payloads (i.e., links or attachments) in phishing emails, presumably to evade detection by today's more capable email security solutions. Taken together with the larger trend toward carefully crafted, narrowly targeted attacks, we find ourselves in a threat landscape once again dominated by good old-fashioned social engineering.

While these “balanced” phishing techniques have been on the rise for some time, their scalability was historically limited by the time-consuming, labor-intensive processes of researching targets and crafting convincing emails. However, it appears this inherent constraint on scale is now a thing of the past, with the emergence of generative AI (GenAI) effectively flipping the funnel on phishing speed and scale.

Interestingly, researchers have been aware of GenAI’s potential for supercharging phishing campaigns since 2021, with some even publishing research demonstrating the ability of OpenAI’s GPT-3 to generate significantly more sophisticated and effective phishing emails in a fraction of the time it takes a human author.

The very next year, ChatGPT was released to the public on November 30, 2022. It became the fastest-growing consumer application of all time — going from zero to 100 million active users in less than two months. Almost as quickly, GenAI technologies found their place among the most formidable tools in the modern hacker’s arsenal.

Generative AI is opening the phishing floodgates

Now, over a year since GenAI tools entered the mainstream, they’ve managed to completely upend the traditional trade-off between quality and quantity that once held phishing content creation in check. With these technologies, attackers can now effortlessly craft expertly written social engineering content in mere seconds. Perhaps even more alarming is that malicious actors can use GenAI tools to generate content in a diversity of formats, styles, and languages, providing them with unprecedented versatility and bandwidth to scale their operations.

Although all of today’s leading commercial GenAI tools possess safeguards designed to prevent malicious use, recent research has demonstrated repeatedly that these guardrails are easily circumvented. Moreover, the security community has witnessed the emergence of GenAI tools explicitly designed for nefarious purposes, such as FraudGPT and WormGPT.

These tools empower threat actors by automating the development of highly personalized spear-phishing and BEC attacks that are not only grammatically correct, but also capable of adapting the text to various languages, contexts, and communication styles. Additionally, these tools expedite open-source intelligence (OSINT) gathering by swiftly collecting information about targets, including personal details, preferences, behaviors, and comprehensive company data.

As AI-driven threats evolve, recent developments such as OpenAI allowing users to create custom GenAI tools are a potential cause for concern. Such customization could enable bad actors to automate even more aspects of the phishing process, even while operating within a tool’s prescribed safeguards.

No matter which way you look at it, the floodgates appear to be opening. Indeed, our research paints a rather stark picture — from Q2 2022 to Q2 2023, BEC attempts increased by 23% in the United States and by 21% globally. Meanwhile, advanced email attacks overall increased by 24% over just the first two quarters of 2023.

The vast majority of organizations aren’t ready

Unfortunately, a significant majority of organizations appear ill-prepared to counter these emerging phishing threats. Chief among the concerns facing most organizations today is the record-high cybersecurity workforce gap, with an estimated need for an additional 4 million professionals worldwide to protect digital assets, as reported by ISC2. The same report reveals that nearly half (48%) of organizations today lack the tools and talent to respond to cyber incidents effectively.

Furthermore, the ISC2 study shows that today’s cybersecurity professionals are feeling less than confident about the current threat landscape. A staggering 75% of them assert that the present threat landscape is the most formidable they’ve encountered in the past five years, and 45% anticipate that artificial intelligence (AI) will pose their greatest challenge in the next two years. This outlook underscores the urgency for organizations to fortify their cybersecurity defenses and adapt to the rapidly evolving nature of cyber threats.

Our analysis found that over 8 million phishing attempts successfully evaded native defenses in 2022 alone. Astonishingly, nearly 88% of these malicious messages were classified as “unknown threats”: advanced phishing attacks without malicious links or attachments that rely solely on social engineering tactics.

The imperative of AI-enhanced security

It’s becoming increasingly apparent that the only reliable way to combat this rising tide of advanced phishing threats is to fight fire with fire: leveraging AI and machine learning-enabled email security solutions as defensive measures against this rapidly changing, increasingly challenging threat landscape.

AI and machine learning-enabled solutions are unique in that they are not only more capable of detecting threats directly (including previously unknown threats), but are also unparalleled in their ability to learn and adapt, ensuring their efficacy improves over time. What’s more, these solutions utilize AI’s ability to create and fine-tune behavioral profiles of specific users so that they can more reliably detect anomalous activity that may be indicative of a compromised account or other forms of impersonation.

So, when Bob, the famously brusque head of accounting, sends you an uncharacteristically cordial email about redirecting payment on an invoice, AI-enabled security tools will recognize the irregularity and flag it as potentially malicious. And as these tools are exposed to more of an organization’s communications over time, they become better at detecting such anomalies, weighing word choice, syntax, sentence structure, and length, along with countless other signals that a human reader would most likely overlook.
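To make the baseline-and-deviation idea concrete, here is a minimal, hypothetical sketch in Python. It is not how any particular vendor’s model works; production systems use far richer features and learned models. The feature names, thresholds, and sample emails below are all illustrative placeholders.

```python
from statistics import mean, stdev

def extract_features(body: str) -> dict[str, float]:
    """Toy stylometric features: average sentence length and a politeness-marker rate."""
    words = body.split()
    # Crude sentence split on terminal punctuation; real systems use proper NLP.
    sentences = [s for s in body.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    polite = sum(body.lower().count(w) for w in ("please", "kindly", "regards"))
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "politeness_rate": polite / max(len(words), 1),
    }

class SenderProfile:
    """Accumulates a sender's feature history and scores new mail against it."""

    def __init__(self) -> None:
        self.history: dict[str, list[float]] = {}

    def update(self, body: str) -> None:
        for name, value in extract_features(body).items():
            self.history.setdefault(name, []).append(value)

    def anomaly_score(self, body: str) -> float:
        """Largest absolute z-score of the new email's features vs. this sender's baseline."""
        scores = [0.0]
        for name, value in extract_features(body).items():
            past = self.history.get(name, [])
            if len(past) < 2:
                continue  # not enough history to judge this feature
            sigma = max(stdev(past), 0.01)  # floor the deviation for constant features
            scores.append(abs(value - mean(past)) / sigma)
        return max(scores)

# "Bob" normally writes terse, blunt emails...
bob = SenderProfile()
for msg in ("Send the Q3 numbers. Now.", "Invoice is wrong. Fix it.", "No. Resend."):
    bob.update(msg)

# ...so an uncharacteristically cordial payment request stands out.
suspicious = ("Hello! I hope you are doing well. Could you kindly redirect "
              "payment on this invoice? Warm regards.")
print(f"anomaly score: {bob.anomaly_score(suspicious):.1f}")  # high score -> flag for review
```

In this toy example, the cordial message scores far above Bob’s baseline on both features, which is exactly the kind of deviation a behavioral profile is built to surface.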

However, AI-enabled security tools are most effective when used to empower, not replace, human personnel, helping employees become more effective, reliable defenders of organizational security. One often-overlooked way in which they do this is by streamlining the day-to-day operations of SOC teams and automating away the kinds of tedious routine tasks that so often monopolize significant amounts of security professionals’ time and energy.

All told, AI-enabled tools offer unparalleled adaptability, efficiency, and detection capabilities — all while making life easier for the often overworked, overwhelmed, and understaffed SOC teams that remain so essential to our world’s collective security posture.

Given the increasing complexity of AI-powered attacks, the need for email security solutions enhanced by adaptive AI has become apparent.

In addition, it’s essential to explore how employees can collaborate with AI-enhanced email security tools. While AI significantly enhances security, it’s not foolproof. Employees play a critical role in scrutinizing flagged emails, engaging with email chatbots for context, and contributing their insights to catch highly sophisticated emails that might circumvent security. This human-AI collaboration ensures a more comprehensive defense against malicious emails while minimizing the risk of false positives, ultimately strengthening an organization’s cybersecurity posture.

Human insight: The email security co-pilot

In addition to deploying the right AI security tools, every CISO should prioritize security awareness training and phishing simulation testing. As phishing tactics evolve, employees may become their company’s last line of defense against novel attacks. To build broader employee knowledge of trending phishing tactics, it’s crucial to develop and implement ongoing training and testing programs.

What an effective training program ultimately looks like will depend largely on the unique needs of your organization. But no matter what those circumstances may be, the following three qualities are essential for any program to be successful:

1. Frequency: Traditional annual awareness training is simply not enough. Continuous and regular training is essential to keeping security top-of-mind among the workforce.

2. Relevancy: Training and simulations must stay up-to-date and reflect the current threat landscape and the latest attack methodologies. Real-world attack scenarios should be the foundation, ensuring that employees can recognize and respond to the tactics being actively used by today’s threat actors.

3. Personalization: While traditional security awareness training is essential for establishing a shared foundation of knowledge across an organization, it’s important to recognize that employees differ in both their specific knowledge and their reliability. As a first step, companies should use phishing simulation testing to establish a performance baseline for each employee. From there, organizations can offer more targeted training simulations tailored to each employee based on their experience, knowledge, department, title, and so on, as sketched below.
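As a hypothetical illustration of that last point, the sketch below computes a per-employee baseline from simulation outcomes and maps it to a training tier. The sample data, cutoffs, and tier names are all placeholders; a real program would also factor in role, department, and the attack types each employee struggles with.

```python
# Hypothetical simulation results: True = employee reported the lure, False = fell for it.
results = {
    "alice@example.com": [True, True, False, True],
    "bob@example.com":   [False, False, True, False],
}

def baseline(outcomes: list[bool]) -> float:
    """Share of simulations the employee handled correctly."""
    return sum(outcomes) / len(outcomes)

def next_training(score: float) -> str:
    """Map a baseline score to a training tier; cutoffs are arbitrary placeholders."""
    if score < 0.5:
        return "fundamentals refresher"
    if score < 0.8:
        return "targeted spear-phishing drills"
    return "advanced BEC simulations"

for employee, outcomes in results.items():
    score = baseline(outcomes)
    print(f"{employee}: baseline {score:.0%} -> {next_training(score)}")
```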

Conclusion

The landscape of phishing attacks has evolved significantly in recent years, with threat actors employing more advanced techniques that target specific individuals, groups, or organizations at a scale and level of sophistication that many legacy email solutions cannot protect against. Attackers have become adept at personalizing their attacks, using publicly available information to craft convincing messages that deceive even the most vigilant professionals.

To defend against these evolving threats, organizations and professionals must remain vigilant and proactive. This includes ongoing education and training, as well as the implementation of robust AI-powered email security solutions. By staying informed and prepared, organizations can significantly reduce their vulnerability to these advanced phishing techniques and protect their valuable assets from cybercriminals.
