Afraid of AI? We should be

Not (yet!) of a sentient digital entity that could turn rogue and bring about the end of mankind, but of the exploitation of artificial intelligence and machine learning for nefarious ends.


What sorts of AI-powered attacks can we expect to see soon if adequate defenses are not developed?

According to a group of 26 experts from various universities, civil society organizations, and think tanks, the threat landscape could undergo dramatic changes over the next five to ten years.

“The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence, and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets,” they noted in a recently released report.

“New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.”

And it’s not only our digital security that will be under attack: our physical and political security could suffer as well.

Plausible attack scenarios

The experts have come up with a number of highly plausible scenarios for AI-powered attacks: automated social engineering attacks, automated vulnerability discovery, human-like denial-of-service, data poisoning attacks that surreptitiously cripple consumer machine learning models or plant backdoors in them, and so on.
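To make the data poisoning scenario concrete, here is a minimal sketch of a label-flipping attack against a classifier, using scikit-learn on synthetic data. The dataset, the logistic regression model, and the 20% flip rate are all illustrative assumptions for the sketch, not details taken from the report.

```python
# Illustrative sketch only: a label-flipping data poisoning attack.
# An attacker who can tamper with training data flips a fraction of
# the labels, quietly degrading the model trained on that data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (stands in for any real dataset).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 20% of the training examples
# (the fraction is an arbitrary choice for this demo).
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# The defender unknowingly trains on the poisoned labels.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running the sketch shows the poisoned model's test accuracy dropping relative to the clean baseline, even though nothing about the training pipeline visibly changed; that stealth is what makes poisoning attractive to attackers.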

In the physical realm, AI can be used to increase the scale of attacks, to power swarming attacks, or to put ever greater distance between attacks and the actors initiating them.

Finally, AI can be used to create extremely targeted propaganda, to manipulate audio and video messages (creating fake news, impersonating targets), to automate hyper-personalised disinformation campaigns, and more. Some of these approaches are already used by various states, but with AI they could become even more effective.

“We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates,” the experts pointed out.

We have to do something, and we have to start now

“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions, and individuals across the globe,” says Dr. Seán Ó hÉigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk and one of the co-authors of the report.

“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”

They call on policymakers to collaborate closely with technical researchers to investigate and mitigate potential malicious uses of AI and say that these problems and solutions should be discussed by a wide range of stakeholders and domain experts.

It’s also of crucial importance that AI researchers and engineers keep in mind the dual-use nature of their work, and allow their research priorities and norms to be influenced by misuse-related considerations.

“AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. We highlight the importance of education, ethical statements and standards, framings, norms, and expectations,” the experts advised.

They also urge them to learn from and with the cybersecurity community, to identify best practices in research areas with more mature methods for addressing dual-use concerns and import them where applicable, and to explore different openness models for their research.
