Abnormal Security CheckGPT detects AI-generated email attacks

Abnormal Security announced CheckGPT, a new capability to detect AI-generated email attacks.

CheckGPT determines when email threats, including business email compromise (BEC) and other socially engineered attacks, have likely been created using generative AI tools.

Cybercriminals constantly evolve their tactics to evade security defenses, and generative AI is the newest weapon in their arsenal. Using tools like ChatGPT or its malicious cousin WormGPT, threat actors can now write increasingly convincing emails, scaling their attacks in both volume and sophistication. In its latest research report, Abnormal observed a 55% increase in BEC attacks over the previous six months, with the potential for volumes to grow exponentially as generative AI becomes more widely adopted.

“The degree of email attack sophistication is going to significantly increase as bad actors leverage generative AI to create novel campaigns,” said Karl Mattson, CISO at Noname Security. “It’s not reasonable that each company can become an AI security specialty shop, so we’re putting our trust in Abnormal to lead the way in that kind of advanced email attack detection.”

Abnormal takes a fundamentally different approach to stopping advanced email attacks than traditional email security solutions, making it particularly well suited to blocking AI-generated attacks. Its API-based architecture ingests thousands of diverse signals to build a baseline of the known-good behavior of every employee and vendor in an organization, based on communication patterns, sign-in events, and thousands of other attributes. It then applies advanced AI models, including natural language processing (NLP), to detect anomalies in email behavior that indicate a potential attack.
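The article does not disclose how the baseline is modeled, but the idea can be illustrated with a minimal sketch: compare a message's signals against the sender's historical norm and flag large deviations. Everything concrete below (the feature names, the z-score statistic) is an assumption for illustration, not Abnormal's actual implementation.

```python
# A minimal sketch of behavioral baselining: score how far a message's
# signals deviate from a sender's historical norm. Feature names and the
# z-score statistic are illustrative assumptions only.
from statistics import mean, stdev

def anomaly_score(history: list[dict], current: dict) -> float:
    """Mean absolute z-score of the current message's numeric features
    against the sender's baseline (history assumed to have 2+ entries)."""
    z_scores = []
    for feature, value in current.items():
        past = [msg[feature] for msg in history]
        mu = mean(past)
        sigma = stdev(past) or 1.0  # avoid dividing by zero on flat features
        z_scores.append(abs(value - mu) / sigma)
    return sum(z_scores) / len(z_scores)

# Hypothetical usage: higher scores mean the message looks less like the
# sender's known-good behavior.
# score = anomaly_score(sender_history, {"send_hour": 3, "recipients": 40})
```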

After initial email processing, the Abnormal platform expands on this classification by further analyzing email attacks to understand their intent and origin. The CheckGPT tool uses a suite of open-source large language models (LLMs) to assess how likely it is that a generative AI model created the message. The system first analyzes the likelihood that each word in the message was generated by an AI model, given the context that precedes it. If that likelihood is consistently high, it is a strong indicator that the text was generated by AI.
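This per-token likelihood test resembles standard perplexity-based detection. A minimal sketch follows, using an open-source causal LM; the model choice ("gpt2" via Hugging Face transformers) and the 0.4 threshold are assumptions for illustration, as the article names no specific models or cutoffs.

```python
# Sketch: score each token's probability given the preceding context,
# then flag text whose tokens are consistently high-probability.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_likelihoods(text: str, model_name: str = "gpt2") -> list[float]:
    """Probability of each token given the context that precedes it."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab_size)

    # Token i is predicted by the logits at position i-1, so shift by one.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    return probs[torch.arange(targets.size(0)), targets].tolist()

def likely_ai_generated(text: str, threshold: float = 0.4) -> bool:
    """Flag text whose tokens are consistently high-probability under the LM."""
    probs = token_likelihoods(text)
    return sum(probs) / len(probs) > threshold
```

Human-written email tends to be less predictable token by token, which is why a consistently high average probability serves as the signal the article describes.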

The system then combines this indicator with an ensemble of AI detectors to make a final determination on whether an attack was likely generated by AI. Using this new detection capability, Abnormal recently published research identifying a number of emails, including business email compromise and credential phishing attacks, that contained language strongly suspected to be AI-generated.
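The article does not say how the detectors are combined; a weighted vote is one common ensembling scheme. The combination rule, detector names, and cutoff below are illustrative assumptions:

```python
# Sketch: combine several AI-text detectors into a single verdict via a
# weighted average. Weights, names, and the 0.5 cutoff are assumptions.
from typing import Callable, Dict

Detector = Callable[[str], float]  # each detector returns a score in [0.0, 1.0]

def ensemble_verdict(text: str,
                     detectors: Dict[str, Detector],
                     weights: Dict[str, float],
                     cutoff: float = 0.5) -> bool:
    """True if the weighted-average detector score crosses the cutoff."""
    total = sum(weights.values())
    score = sum(weights[name] * detect(text)
                for name, detect in detectors.items()) / total
    return score >= cutoff

# Hypothetical usage: fold the per-token test above in with other detectors.
# verdict = ensemble_verdict(body,
#                            {"token_likelihood": ..., "stylometry": ...},
#                            {"token_likelihood": 0.7, "stylometry": 0.3})
```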

“As the adoption of generative AI tools rises, bad actors will increasingly use AI to launch attacks at higher volumes and with more sophistication,” said Evan Reiser, CEO at Abnormal Security. “Security leaders need to combat the threat of AI by investing in AI-powered security solutions that ingest thousands of signals to learn their organization’s unique user behavior, apply advanced models to precisely detect anomalies, and then block attacks before they reach employees. While it’s important to understand whether an email was generated by a human or AI to understand and stay ahead of evolving threats, the right system will detect and block attacks no matter how they were created.”
