Polymorphic phishing attacks flood inboxes
AI is transforming the phishing threat landscape at a pace many security teams are struggling to match, according to Cofense.
In 2024, researchers tracked one malicious email every 42 seconds, many of them part of polymorphic phishing campaigns.
Unlike traditional phishing methods, polymorphic phishing attacks rely on dynamic changes to the appearance and structure of malicious emails or links. Attackers use sophisticated algorithms to alter subject lines, sender addresses, and email content in real time, effectively bypassing static signature-based email filters.
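To illustrate why static signature matching fails against this technique, here is a minimal, hypothetical sketch (not from the Cofense report) of how a polymorphic kit might shuffle subject lines, sender domains, and body fragments so that no two messages hash to the same signature. All domain names and templates below are invented for illustration.

```python
import hashlib
import random

# Hypothetical content pools a polymorphic kit might draw from per send.
SUBJECTS = ["Invoice overdue", "Payment reminder", "Action required: invoice"]
SENDER_DOMAINS = ["billing-dept.example", "accounts-payable.example", "invoice-team.example"]
BODY_TEMPLATES = [
    "Please review the attached {doc} at your earliest convenience.",
    "The attached {doc} requires your approval today.",
]

def generate_variant(rng: random.Random) -> dict:
    """Assemble one randomized email variant."""
    body = rng.choice(BODY_TEMPLATES).format(doc=rng.choice(["invoice", "statement"]))
    return {
        "subject": rng.choice(SUBJECTS),
        "sender": f"billing@{rng.choice(SENDER_DOMAINS)}",
        "body": body,
    }

def signature(email: dict) -> str:
    """A naive static signature: hash of the full message content."""
    raw = (email["subject"] + email["sender"] + email["body"]).encode()
    return hashlib.sha256(raw).hexdigest()

rng = random.Random(1)
variants = [generate_variant(rng) for _ in range(5)]
sigs = {signature(v) for v in variants}
print(f"{len(variants)} variants produced {len(sigs)} distinct signatures")
```

Because each send produces a different hash, a filter that blocklists previously seen message signatures never matches the next variant, which is why detection has to key on behavior and intent rather than exact content.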
40% of malware detected in 2024 was newly observed, with nearly half classified as Remote Access Trojans (RATs). These versatile threats enable persistent access and signal a shift toward more complex, multipurpose attacks. By automating code generation, AI streamlines the development of sophisticated malware, reducing the technical expertise required of attackers.
BEC attacks are on the rise
Email-based scams surged 70% year-over-year, driven by AI’s ability to automate lures, spoof internal conversations, and bypass spam filters with subtle text variations.
Threat actors are now using AI to craft convincing emails that impersonate C-suite executives, often mimicking real forwarded threads and referencing payment approvals. These messages are sent from lookalike domains such as “@consultant.com,” and because they’re written by AI, they contain fewer of the typos, formatting inconsistencies, and awkward phrasing that would normally raise suspicion.
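With polished wording no longer a reliable tell, one simple defensive check is to compare the sender's domain against an organization's known domains and flag near matches. The sketch below (an illustration, not a method from the report; the trusted domains are placeholders) uses Python's `difflib.SequenceMatcher` for a crude similarity score:

```python
import difflib

# Placeholder trusted domains for this sketch.
TRUSTED_DOMAINS = {"example.com", "corp.example.com"}

def is_lookalike(sender: str, threshold: float = 0.8) -> bool:
    """Flag sender domains that closely resemble, but do not equal, a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact trusted domain, not a lookalike
    return any(
        difflib.SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_lookalike("ceo@examp1e.com"))   # homoglyph of example.com: flagged
print(is_lookalike("ceo@example.com"))   # exact trusted domain: not flagged
```

A production filter would go further (punycode normalization, keyboard-adjacency and homoglyph tables), but even this cheap string-distance check catches the "examp1e.com"-style substitutions common in BEC campaigns.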
Categories with the largest increases in reported malicious emails:
- Education: 341%
- Construction: 1,282%
- Tax-related campaigns: 340%
- Campaigns utilizing legitimate files: 575%
Microsoft has been identified as the most frequently spoofed brand in 2024.
GenAI targeting perfection
The rise of GenAI tools has introduced a new era of customization in cyberattacks, enabling threat actors to craft highly targeted and cosmetically flawless campaigns at scale with minimal effort.
By analyzing publicly available data, such as company names and job titles, from social media platforms, leaked databases, and online footprints, cybercriminals can create customized messages that resonate with specific targets.
For instance, an AI-generated phishing email might reference a victim’s recent purchases, professional affiliations, or interests, thereby increasing the likelihood of engagement.