Cybercriminals get productivity boost with AI

While AI technology has the potential to streamline and automate processes for beneficial outcomes, it also brings significant risks to data protection and cybersecurity, as well as broader ethical concerns, according to iProov.

AI technology cybersecurity risks

Digital ecosystems continue to grow and multiply at record levels as organizations and governments seek to provide remote access and services to meet consumer and workforce demand. However, this growth’s unintended side effect is an ever-expanding attack surface that, coupled with the availability of easily accessible and criminally weaponized generative AI tools, has increased the need for highly secure remote identity verification.

Cybercriminals are using advanced AI tools

The new threat report from iProov reveals how bad actors are using advanced AI tools, such as convincing face swaps in tandem with emulators and other metadata manipulation methodologies (traditional cyberattack tools), to create new and widely unmapped threat vectors.

Face swaps are created with generative AI tools and present a huge challenge to identity verification systems because they can manipulate key traits of an image or video. A face swap can be generated by off-the-shelf video face-swapping software and deployed by feeding the manipulated or synthetic output to a virtual camera. Unlike the human eye, advanced biometric systems can be made resilient to this type of attack.

However, in 2023, malicious actors exploited a loophole in some systems by using cyber tools, such as emulators, to conceal the existence of virtual cameras, making the attack harder for biometric solution providers to detect. This created the perfect storm, with attackers making face swaps and emulators their preferred tools for perpetrating identity fraud.

“Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification,” says Andrew Newell, Chief Scientific Officer, iProov.

“While the data in our report highlights that face swaps are currently the deepfake of choice for threat actors, we don’t know what’s next. The only way to stay one step ahead is to constantly monitor and identify their attacks, the attack frequency, who they’re targeting, the methods they’re using, and form a set of hypotheses as to what motivates them,” added Newell.

The use of emulators and metadata spoofing by threat actors to launch digital injection attacks across different platforms was first observed by the iProov Security Operations Center (iSOC) in 2022, and it continued to dominate in 2023, growing by 353% from H1 to H2 2023. An emulator is a software tool used to mimic a user's device, such as a mobile phone. These attacks are evolving rapidly and pose significant new threats to mobile platforms: injection attacks against mobile web surged by 255% from H1 to H2 2023.
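To illustrate the defensive side of this problem, one common response to metadata spoofing is to cross-check the device metadata a client reports against independently observed signals. The sketch below is a hypothetical, heavily simplified consistency check; all field names, labels, and rules are illustrative assumptions, not a description of iProov's detection methods or any real product.

```python
# Hypothetical, simplified metadata consistency check.
# All field names and rules are illustrative assumptions,
# not a description of any real verification product.

def metadata_inconsistencies(claimed: dict, observed: dict) -> list[str]:
    """Return mismatches between client-claimed device metadata
    and independently observed signals."""
    issues = []

    # An emulator often claims a mobile OS while other signals
    # point to a desktop environment.
    if claimed.get("platform") == "mobile" and observed.get("screen_class") == "desktop":
        issues.append("mobile platform claimed, desktop-class screen observed")

    # A virtual camera may expose a generic or mismatched device label.
    camera = observed.get("camera_label", "").lower()
    if any(hint in camera for hint in ("virtual", "obs", "emulated")):
        issues.append(f"suspicious camera label: {camera!r}")

    # The claimed OS should appear in the reported user agent.
    if claimed.get("os") and claimed["os"] not in observed.get("user_agent", ""):
        issues.append("claimed OS missing from user agent")

    return issues


flags = metadata_inconsistencies(
    claimed={"platform": "mobile", "os": "Android 14"},
    observed={"screen_class": "desktop",
              "camera_label": "OBS Virtual Camera",
              "user_agent": "Mozilla/5.0 (X11; Linux x86_64)"},
)
```

Real systems combine many more signals than this, and, as the report notes, emulators that conceal virtual cameras are designed precisely to defeat naive checks like the camera-label rule above.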

New trends discovered

Across 2022 and 2023, indiscriminate attacks ranged from 50,000 to 100,000 per month. There was also a considerable increase in the number of threat actors and in the sophistication of the tools they used.

iProov also observed significant growth in the number of groups exchanging information about attacks on biometric and remote human identification, or "video identification," systems, evidence of the increasingly collaborative approach now adopted by threat actors. Of the groups identified by iProov's analysts, 47% were created in 2023.

There are two primary attack types observed by the iSOC: presentation attacks and digital injection attacks.

The iSOC also saw a significant increase in the deployment of packaged AI imagery tools, which make launching an attack far easier and quicker; this trend is only expected to advance.

There was a 672% increase from H1 2023 to H2 2023 in the use of deepfake media such as face swaps being deployed alongside metadata spoofing tools. Presentation and digital injection attacks may have different levels of impact, but they can pose a significant threat when combined with traditional cyberattack tools like metadata manipulation.
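To make the reported percentages concrete: an N% increase means the H2 volume is (1 + N/100) times the H1 volume, so a 672% increase is nearly an eightfold jump. A quick sketch using the figures from the report:

```python
def growth_multiplier(percent_increase: float) -> float:
    """Convert a percentage increase into a volume multiplier."""
    return 1 + percent_increase / 100

# H1 -> H2 2023 growth figures from the iProov report:
figures = [
    ("emulator + metadata spoofing injection attacks", 353),
    ("injection attacks against mobile web", 255),
    ("deepfakes deployed with metadata spoofing tools", 672),
]

for label, pct in figures:
    print(f"{label}: H2 volume is {growth_multiplier(pct):.2f}x H1")
```

So the 672% rise in face swaps paired with metadata spoofing means those attacks were observed roughly 7.7 times as often in H2 as in H1 2023.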
