Emerging threat: AI-powered social engineering

Social engineering has always been a sophisticated form of manipulation, but thanks to AI advancements, malicious groups now have access to far more capable tools, suggesting that we may be facing more elaborate social engineering attacks in the future.

It is becoming increasingly evident that the current “don’t click the link” training approach will not suffice to tackle the evolving nature of social engineering.


How malicious actors are putting LLMs to use

Large language models (LLMs) like ChatGPT are trained on vast amounts of text data to generate human-like responses and perform various language-related tasks. These models have billions of parameters, allowing them to understand and generate text in a coherent and contextually relevant manner.

ChatGPT has become a powerful tool in malicious actors’ arsenal. The days of poorly worded, error-ridden emails cluttering our spam boxes may soon be gone. The text can now be enhanced and refined, making emails sound more convincing.

It’s worth noting that many phishing emails are crafted by non-native English speakers, as numerous hacking organizations operate outside of English-speaking countries. LLMs like ChatGPT allow these individuals to rewrite phishing emails to better match their target audience’s language and context.

The recipients of such emails frequently include people who handle financial matters or hold influential positions within the organization, and who are therefore able to execute transactions. Well-crafted emails tend to yield higher success rates.

WormGPT, an AI model available on the dark net, is designed to produce text for hacking campaigns. Because it has no content restrictions, malicious actors can generate any type of content with it without having to worry about their accounts being blocked.

Deepfakes are good but not (yet) flawless

Deepfake videos use AI and deep learning techniques to create highly realistic but fabricated content. Deepfakes often involve replacing the faces of individuals in existing videos with other people’s faces, typically using machine learning algorithms known as generative adversarial networks (GANs). These algorithms analyze and learn from vast amounts of data to generate highly convincing visual and audio content that can deceive viewers into believing that the manipulated video is authentic.
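To make the adversarial idea concrete, below is a minimal, illustrative sketch (in PyTorch) of a generator and a discriminator trained against each other. It learns a toy one-dimensional distribution rather than faces, but the same feedback loop, scaled up to images, is what lets deepfake tools produce convincing footage. This is a teaching example under simplified assumptions, not how any particular deepfake tool is implemented.

```python
# Minimal sketch of GAN-style adversarial training (toy example, not a deepfake tool).
# The generator learns to produce samples the discriminator cannot tell apart from
# "real" data; scaled up to images, this is the core mechanism behind deepfakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 4.0   # "authentic" data: Gaussian around 4.0
    fake = generator(torch.randn(64, 8))     # generated ("fake") samples

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce samples the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near 4.0, like the real data.
print(generator(torch.randn(1000, 8)).mean().item())
```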

Deepfake technology is easiest to evaluate by watching videos in which the “deepfaked” person is a celebrity or someone the viewer is visually familiar with. By that standard, Deepfakes Web, the well-known deepfake generator, falls short: it is immediately apparent that something is wrong with the videos.

However, DeepFaceLab, another piece of deepfake software, is a different story. It is the tool behind most current deepfakes, and the believability of the results hinges largely on the skill of the creator. This Lucy Liu deepfake created with DeepFaceLab is particularly impressive.

The challenge of achieving believability in deepfake videos lies in accurately replicating hair and facial features. When the person in the source footage has a significantly different hairline or facial structure from the person being imitated, the resulting deepfake looks far less convincing.

Malicious actors are fortunate in this regard, however. There is an abundance of aspiring actors willing to be filmed and have their appearance altered, and no shortage of people who will agree to be recorded engaging in various activities, especially when they are assured that their identity will never be exposed.

The current use of deepfakes is even more worrying than the availability of the tools to create them. Shockingly, around 90% of deepfakes are used for nonconsensual pornography, particularly for revenge purposes. What compounds the issue is the absence of specific laws in Europe to protect the victims.

A potent method for blackmail

Imagine someone staging fake hidden camera footage and using AI to replace the participants’ faces with the victim’s. Although the footage is fabricated, explaining the situation to a spouse or a boss becomes an incredibly difficult task. The possibilities for compromising individuals are boundless.

As malicious actors gain the upper hand, we could potentially find ourselves stepping into a new era of espionage, where the most resourceful and innovative threat actors thrive. The introduction of AI brings about a new level of creativity in various fields, including criminal activities.

The crucial question remains: How far will malicious actors push the boundaries? We must not overlook the fact that cybercrime is a highly profitable industry with billions at stake. Certain criminal organizations operate similarly to legal corporations, having their own infrastructure of employees and resources. It is only a matter of time before they delve into developing their own deepfake generators (if they haven’t already done so).

With their substantial financial resources, it’s not a matter of whether it is feasible but rather whether it will be deemed worthwhile. And in this case, it likely will be.

What preventative measures are currently on offer? Various scanning tools have emerged, asserting their ability to detect deepfakes. One such tool is Microsoft’s Video Authenticator Tool. Unfortunately, it is currently limited to a handful of organizations engaged in the democratic process.

Another free option is the Deepware deepfake scanner, which has been tested with YouTube videos and proved capable of recognizing known deepfakes. When presented with genuine content, however, it struggles to scan accurately, raising doubts about its overall effectiveness. It appears to have been trained mainly on known deepfakes and struggles to recognize anything else.

Additionally, Intel claims its FakeCatcher scanner has a 96% accuracy in deepfake detection. However, given that most existing deepfakes can already be recognized by humans, one may question the actual significance of this claim.
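Whatever the vendor, these scanners broadly share the same pipeline: decode the video into frames, locate faces, score each face with a trained classifier, and aggregate the scores. The sketch below illustrates that generic pipeline in Python with OpenCV. The classifier itself is a placeholder stub, since none of the tools mentioned above expose a public scoring API; a real detector would plug a trained model in at that point.

```python
# Illustrative sketch of a generic frame-level deepfake scanning pipeline.
# OpenCV handles video decoding and face detection; score_face is a hypothetical
# placeholder for a trained forgery classifier (not any vendor's actual API).
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def score_face(face_img: np.ndarray) -> float:
    """Placeholder: return a fake-probability for one face crop.
    A real scanner would run a trained model here."""
    return 0.0  # stub value; always treats the face as authentic

def scan_video(path: str, frame_step: int = 30) -> float:
    """Average fake-probability over faces found in sampled frames of a video."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
                scores.append(score_face(frame[y:y + h, x:x + w]))
        index += 1
    capture.release()
    return float(np.mean(scores)) if scores else 0.0

# Example usage (assumes a local file and an agreed flagging threshold):
# if scan_video("interview.mp4") > 0.5:
#     print("Video flagged for manual review")
```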

Voice fakes also pose a significant threat to organizations

Voice fakes are artificially generated or manipulated audio recordings that aim to imitate or impersonate someone’s voice. Like deepfake videos, voice fakes are generated with advanced machine learning techniques, particularly speech synthesis and voice conversion algorithms. The result is highly convincing audio that mimics a specific individual’s speech pattern, tone, and nuances.

Voice fakes can be created based on just a few seconds of audio. However, to effectively deceive someone who knows the individual well, longer recordings are required. Obtaining such recordings becomes simpler when the targeted person maintains a strong online presence.

Alternatively, adept social engineers can skillfully keep individuals talking for over a minute, making the acquisition of voice samples relatively effortless. Currently, voice fakes are more believable than deepfake videos, and research into the target’s speech patterns only increases the probability of a successful attack.

Consequently, we find ourselves in a situation where the success of such attacks relies on the extent of effort that malicious actors are willing to invest. This evolving landscape may have profound implications for so-called whale phishing attacks, where high-profile figures are targeted. These types of social engineering attacks garner the utmost attention and allocation of resources within malicious organizations.

With the threat that voice fakes pose, it is becoming evident that implementing two-factor authentication for sensitive phone calls, where transactions or the sharing of sensitive information occur, is essential. We are entering a digital communication landscape where the authenticity of any form of communication may be called into question.
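One simple way to add such a second factor is to agree on a shared secret through a separate, trusted channel and have the caller read back a time-based one-time code before any transaction is approved. The sketch below, which assumes Python’s pyotp library and a secret exchanged in advance, is a minimal illustration of the idea rather than a complete solution.

```python
# Sketch of out-of-band verification for sensitive phone calls using TOTP.
# Assumes both parties exchanged shared_secret earlier via a separate, trusted
# channel (e.g. in person), never over the call itself. Library: pyotp.
import pyotp

# Generated once and distributed securely ahead of time.
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

def caller_generates_code() -> str:
    """The caller reads the current one-time code aloud during the call."""
    return totp.now()

def callee_verifies(spoken_code: str) -> bool:
    """The callee checks the code before acting on any request.
    valid_window=1 tolerates one 30-second step of clock drift."""
    return totp.verify(spoken_code, valid_window=1)

code = caller_generates_code()
print("Caller says:", code)
print("Approve request?", callee_verifies(code))          # True
print("Approve forged request?", callee_verifies("000000"))  # almost certainly False
```

A cloned voice cannot produce a valid code without access to the shared secret, which is why the verification step has to live outside the phone call itself.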

Should we pen test humans?

As AI becomes increasingly integrated into everyday life, it naturally becomes intertwined with the cybersecurity landscape. While the presence of voice fake and deepfake scanners is promising, their accuracy must be thoroughly tested. It is reasonable to anticipate that pen testing efforts will increasingly focus on AI, leading to a shift in some security assessments.

Evaluating the online presence of high-profile individuals and the ease of creating convincing deepfakes may soon become integral to cybersecurity and red team engagements. We might even see incident prevention and response teams specifically dedicated to combating social engineering attacks.

Currently, if someone falls victim to extortion through a deepfake, where can they turn for help? They certainly won’t approach their employer and say, “There might be a sensitive video circulating, but don’t worry, it’s just a deepfake.” However, having a team capable of addressing this issue confidentially and mitigating the impact of such attacks on individuals could become a vital service for companies to consider.

While the transformative power of the new AI-driven world on the cybersecurity landscape is evident, the exact nature of these changes remains uncertain.
