The double-edged sword of generative AI

Generative AI has captured the imagination of millions worldwide, largely driven by the recent success of ChatGPT, the text-generation chatbot. Our new research showed that 67% of consumers globally have heard of generative AI technologies, and in some markets, like Singapore, almost half (45%) have used an application built on them.

While there have been many laudable applications of these technologies in healthcare, education and even art, it's essential to understand the broader ways in which they could be used and the potential risks they pose.

Low-cost, high-impact disinformation

Before sophisticated models like ChatGPT were publicly available, organized disinformation campaigns demanded significant resources: serious operations needed multiple people working together to run effectively.

However, developments in generative AI have made it much easier to create compelling fake news stories, social media posts and other types of disinformation quickly and at a much lower cost. These systems can now generate content that is almost indistinguishable from human writing, making it difficult for people to detect when they are being exposed to false information.

For example, just one malicious actor could use a large language model to create a fake story that looks and reads like a legitimate news article, complete with quotes from fake sources and a convincing narrative. This type of content can be spread quickly and easily through social media, where it can reach millions of people within just a few hours.

This spread of disinformation becomes even easier when social media platforms don't put robust identity verification checks in place at the account-opening stage: numerous fake accounts can be opened to amplify disinformation at a far greater scale than was previously possible.

New fraud frontiers

As well as enabling disinformation, these technologies have also opened new possibilities for fraud and social engineering scams.

Our research showed that over half of consumers globally (55%) know that AI can be used to create audio deepfakes that mimic the voice of someone they know, personally or publicly, to trick them into handing over sensitive information, money and more. Yet despite that awareness, these kinds of scams defrauded victims of over $11 million in the US alone in 2022.

Before the advent of generative AI-powered chatbots, scammers would typically use pre-made scripted responses or rudimentary chatbots to engage with their victims. These scripted responses were often irrelevant to the specific questions posed, making it easier for potential victims to detect the fraudulent nature of the interaction.

However, generative AI technology now allows scammers to build chatbots that mimic human interaction convincingly. Drawing on large language models, these chatbots can analyse the messages they receive, understand the context of the conversation and generate human-like responses free of the tell-tale language and grammar errors associated with older chat scams. In this way, generative AI-powered chatbots represent a significant departure from the past, allowing fraudsters to extort information from their victims far more easily.

Fighting fire with fire

Despite these risks, it is important to stress that AI itself can be an effective way to tackle them. It comes down to leveraging its power for identity verification and authentication.

For example, to tackle the scale and spread of disinformation, social media platforms can use multimodal biometrics to verify that users are genuine humans and are who they claim to be at the account-opening stage. This would go a long way toward stamping out fake accounts opened to spread disinformation.

Multimodal biometric systems, which combine multiple forms of biometric data, such as face, voice or iris recognition, with machine learning algorithms, can improve the accuracy of identity verification. They can also incorporate liveness detection, which helps flag accounts created with face morphs and deepfakes.
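
To make this concrete, here is a minimal sketch of how such a system might fuse match scores and gate on liveness. It is an illustrative assumption rather than any specific vendor's implementation; the weights, thresholds and the BiometricScores structure are all hypothetical.

```python
# A minimal sketch of score-level fusion for multimodal biometric
# verification. Weights, thresholds and field names are illustrative
# assumptions, not a real product's values.
from dataclasses import dataclass

@dataclass
class BiometricScores:
    face_match: float   # similarity to the enrolled face template, 0.0-1.0
    voice_match: float  # similarity to the enrolled voiceprint, 0.0-1.0
    liveness: float     # confidence the sample comes from a live person

FACE_WEIGHT = 0.6          # face and voice fuse into one identity score
VOICE_WEIGHT = 0.4
LIVENESS_THRESHOLD = 0.9   # hard gate: reject face morphs and deepfakes
MATCH_THRESHOLD = 0.8      # fused score required to accept the identity

def verify(scores: BiometricScores) -> bool:
    # Gate on liveness first: a spoofed sample can still score highly
    # on face match, so a high match score alone must never pass.
    if scores.liveness < LIVENESS_THRESHOLD:
        return False
    fused = FACE_WEIGHT * scores.face_match + VOICE_WEIGHT * scores.voice_match
    return fused >= MATCH_THRESHOLD

# A convincing deepfake may match the enrolled face well but fail the
# liveness gate, so the account-opening attempt is rejected.
print(verify(BiometricScores(face_match=0.95, voice_match=0.90, liveness=0.35)))  # False
print(verify(BiometricScores(face_match=0.92, voice_match=0.88, liveness=0.97)))  # True
```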

While no technology can truly prevent a consumer from falling for a generative AI-fuelled scam that convinces them to give away personal information, such as login details, it can limit how that stolen information is then used.

For example, suppose personal information is stolen to take over an existing online account or to set up a fraudulent one. If the online provider has a multimodal biometric verification or authentication system in place, the scammer can't do much with those stolen details, as they would be stopped in their tracks at the biometric step.
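
As a rough sketch of that flow, assuming a deliberately simplified user store and the hypothetical fused biometric check from the earlier sketch, valid credentials alone never grant access:

```python
# A minimal sketch of a login flow with a biometric step-up: phished
# credentials alone are not enough. The user store and hashing are
# deliberately simplified (a real system would use a salted KDF such
# as bcrypt, not bare SHA-256).
import hashlib

USERS = {"alice": hashlib.sha256(b"correct-horse").hexdigest()}

def login(username: str, password: str, biometric_ok: bool) -> str:
    stored = USERS.get(username)
    if stored is None or hashlib.sha256(password.encode()).hexdigest() != stored:
        return "denied: bad credentials"
    # Even with valid credentials, the biometric check (e.g. the fused
    # face/voice/liveness decision sketched earlier) must also pass.
    if not biometric_ok:
        return "denied: biometric step-up failed"
    return "access granted"

# A scammer armed with stolen credentials is stopped at the biometric step.
print(login("alice", "correct-horse", biometric_ok=False))  # denied
print(login("alice", "correct-horse", biometric_ok=True))   # granted
```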

There are many potential benefits on offer when it comes to generative AI. However, these technologies have also lowered the barriers to propagating disinformation and simplified the workflows of opportunistic fraudsters. But this is no reason to dismiss AI out of hand: there is real promise in using AI-powered solutions themselves as preventative tools that help users navigate the digital world with enhanced trust and confidence.
