Generative AI: A benefit and a hazard

If there’s one thing people will remember about AI advances in 2022, it’ll be the advent of sophisticated generative models: DALL·E 2, Stable Diffusion, Midjourney, ChatGPT. They all made headlines – and they will change the way we work and live.

Generative AI hazard

Generative models will be integrated into the software we use every day. Sometime soon, we’ll be able to ask our email client to write a reply, ask our presentation software to generate an image, or ask our word processor to write an intro to our latest report.

Machine learning models will generate more and more of the content we interact with

That will also be true of content created by adversaries. This moment presents more than an interesting thought experiment about how consciousness, society, and commerce may change. Whether or not we can identify machine-generated content will have serious consequences for our vulnerability to crime.

In the case of language models such as ChatGPT, the ability to automate the generation of written content will inevitably interest criminals, especially cyber criminals. Likewise, anyone who uses the web to spread scams, fake news, or misinformation may have an interest in a tool that creates credible, possibly even compelling, text at incredible speed.

Today’s large language models are perfectly capable of crafting email threads suitable for spear-phishing attacks, “deepfaking” a person’s writing style, injecting opinion into written content, and even creating very convincing fake articles (even when the relevant information wasn’t included in the model’s training data).

Models designed to generate images are already seeing widespread malicious use

For example, StyleGAN2, the model behind thispersondoesnotexist.com, has supplied adversaries with hundreds of thousands of social media avatars. Language models can then complete those fake profiles by writing bios and posts, and AI art models can provide banners and other convincing media.
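One telltale weakness defenders can exploit: because the face photos StyleGAN2 was trained on were aligned before training, its generated portraits tend to place the eyes at nearly identical pixel coordinates in every image, whereas real photos vary. Below is a minimal sketch of that heuristic, assuming the open-source face_recognition library; the coordinate constants are illustrative assumptions that would need calibrating against a sample of known fakes, not published values.

```python
# Heuristic check for StyleGAN-style avatars: aligned training data means
# generated portraits put the eyes in nearly the same place every time.
# Requires: pip install face_recognition (pulls in dlib and numpy).
import face_recognition
import numpy as np

# Approximate eye centers in 1024x1024 StyleGAN2 portraits
# (illustrative assumption -- calibrate against known fakes).
EXPECTED_EYES = {"left_eye": (385, 480), "right_eye": (640, 480)}
TOLERANCE = 20  # allowed deviation in pixels per axis

def looks_like_stylegan_avatar(path: str) -> bool:
    """Return True if both detected eye centers sit near the fixed
    positions typical of aligned GAN-generated portraits."""
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_landmarks(image)
    if len(faces) != 1:
        return False  # heuristic only applies to single-face avatars
    scale = 1024 / image.shape[1]  # normalize to the 1024px reference size
    for eye in ("left_eye", "right_eye"):
        center = np.mean(faces[0][eye], axis=0) * scale
        if np.any(np.abs(center - np.array(EXPECTED_EYES[eye])) > TOLERANCE):
            return False
    return True

if __name__ == "__main__":
    print(looks_like_stylegan_avatar("suspect_avatar.jpg"))
```

A check like this is brittle on its own (cropping or newer models defeat it), but it illustrates how artifacts of the generation pipeline can become detection signals.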

Deepfakes have been around for a few years. While the most impressive (and believable) examples have thus far, thankfully, been created by experts as proof-of-concept works, the technology is improving and the price of hardware capable of generating deepfakes keeps dropping. It’s only a matter of time before a deepfake slips under the radar and convinces people of something that never happened.

Adversaries will most likely weigh cost against revenue and use generative models only where it pays off. However, the cost of these techniques will clearly fall over time even as they grow more sophisticated. And while some of the newer generative techniques haven’t yet been documented in malicious scenarios, it’s only a matter of time before they are.

What can we do to protect ourselves from the dangers of synthetically generated text, images, voices, and videos?

Since generative techniques will be used to create both benign and malicious content, simply detecting that an AI created something will not be enough to deem it malicious. To programmatically detect malicious content, we’ll need mechanisms that can understand and derive meaning from writing, images, and videos: the means to detect online abuse, harassment, disinformation, and fake news, and to catch the spread and amplification of such content.
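As a minimal sketch of what meaning-level screening could look like, the snippet below uses a general-purpose zero-shot classifier from the Hugging Face transformers library (the facebook/bart-large-mnli checkpoint) to score a message against intent labels, rather than asking who or what wrote it. The example message and label set are illustrative assumptions; a production system would also need the spread-and-amplification signals mentioned above.

```python
# Intent-based content screening sketch: score a message against
# defender-chosen labels instead of trying to prove machine authorship.
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical suspicious message; the label set is an illustrative choice.
text = ("Your mailbox is over quota. Log in within 24 hours at the link "
        "below or your account will be permanently suspended.")
labels = ["phishing or scam", "harassment", "disinformation", "benign"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```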

Phishing awareness and media literacy will become even more important in a world where an overwhelming amount of both benign and malicious content is generated by artificial intelligence. Spam will no longer contain spelling and grammatical errors; a perfectly written email asking you to click on a link may soon be the very thing that marks a message as suspicious. Fake social media accounts will no longer have generic faces as avatars. Fake news sites and company sites will look far more realistic, and AI will even write the customer testimonials on them.

We’re sitting at the very beginning of the S-curve with respect to generative AI capabilities. By the end of this decade, technologies that far exceed those we see now will be integrated into our phones, home assistants, laptops, search engines, and more. They’ll be integrated into our very lives. And they’ll benefit adversaries every bit as much as they benefit us – if not more.
