AI vs. AI: Cybersecurity battle royale

David and Goliath. The Invasion of Normandy. No matter the generation, we all know some of the storied battles that have withstood the test of time. In cyberspace, however, there’s a fierce battle brewing surrounding artificial intelligence.

With AI projected to become a $190 billion industry by 2025 (according to Markets and Markets), it is more integrated into our everyday lives than we may notice at this stage, and it continues to gain popularity. AI has found its way into home appliances, medical imaging, natural language processing and even musical composition.

One area where AI has remained a constant is cybersecurity, where its continual learning helps detect and combat cyberthreats. But what if this technology were to fall into the wrong hands? If AI were suddenly being used to aid cyberthreats rather than eradicate them, could "good AI" take on "bad AI" and win the war on malware?

While there are some barriers in place today to prevent such a thing, there's no harm in preparing for the inevitable. Here is why we need to turn our attention to the AI vs. AI battle and prevent this potentially catastrophic phenomenon:

Outpacing human capability

We are all appreciative of the sheer power of technology, but as realists we need to be aware of its capabilities and impacts. Part of that includes recognizing that AI does indeed have the potential to outpace human development. This was demonstrated by IBM at Black Hat 2018, where researchers presented DeepLocker, a proof-of-concept strain of deep learning-powered malware. DeepLocker used a facial-recognition model to autonomously select which computer to attack with its encrypted ransomware payload.

DeepLocker was able to convert and conceal the trigger of the attack into a kind of "password": only when its deep learning model recognized the intended target did it generate the key that unlocked the attack payload. DeepLocker demonstrated how such a threat could outpace human capability in finding targets and executing attacks, and the same approach could be duplicated using audio, location or other algorithms built right into most of our everyday devices.
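The trigger mechanism described above is an instance of what is sometimes called environmental keying: the decryption key is derived from an attribute of the target's environment, so the payload stays opaque to analysts until the trigger condition is actually met. Here is a deliberately harmless sketch of the idea; all names are hypothetical, the "payload" is a benign string, and a toy XOR routine stands in for real encryption:

```python
import hashlib

def derive_key(attribute: bytes) -> bytes:
    # The key is a hash of an environmental attribute (e.g. a face
    # embedding). Without the right attribute, the key is unrecoverable.
    return hashlib.sha256(attribute).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" standing in for a real encryption scheme.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Sealing": encrypt the payload under a key derived from the target
# attribute, then discard the attribute. Only the sealed blob ships.
target_attribute = b"example-target-fingerprint"  # hypothetical
payload = b"hello"                                # benign stand-in
sealed = xor_bytes(payload, derive_key(target_attribute))

# "Unsealing" succeeds only when the observed attribute matches.
assert xor_bytes(sealed, derive_key(b"wrong-attribute")) != payload
assert xor_bytes(sealed, derive_key(target_attribute)) == payload
```

Because the key never appears in the code, static analysis of the sealed blob reveals nothing about the payload or the intended target, which is what made this class of threat notable.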

These attacks are triggered by the AI itself, not by the developer of the original algorithm or by a user entering a password, meaning the threat can fire far beyond our ability to observe or control it. This is why an advanced method of defense needs to be implemented for ultimate protection.

An unidentifiable perpetrator

One of the most puzzling aspects of the majority of cyberattacks is that the actual perpetrator who hits the "send" button is usually protected by anonymity. Rarely does the person behind an attack come face to face with their targets, especially when these attacks are mostly aimed at taking down entire organizations or affecting millions of users with a single click.

With AI, that psychological distance between perpetrator and victim expands exponentially. It's important to remember that the technology is merely the method, possessing no moral compass whatsoever. Behind it lies a perpetrator who did nothing more than create an algorithm.

As malicious AI evolves, it will continue to gain knowledge, carefully select its targets and inflict damage in ways that outsmart its human counterparts, making it enormously difficult to find a person to blame and prosecute. This makes it more crucial than ever to develop and implement solutions that combat malicious AI in an entirely different manner: by detecting its vulnerabilities and outsmarting the technology itself.

The need for deep learning

If AI falls into the wrong hands, adversarial algorithms could be used to hinder the functionality of the benign AI models built with traditional machine learning. The potential to embed malware into an algorithm and launch an attack thus becomes a harsh reality. While traditional machine learning remains vulnerable to this kind of manipulation, deep learning serves as a more advanced form of artificial intelligence, whose enhanced learning methods could help break this cycle.
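One well-documented way an attacker can hinder a benign model is the adversarial example: using the model's own gradient to find a tiny input perturbation that flips its decision. The sketch below is not from the article; it applies the fast gradient sign idea to a hypothetical linear classifier with made-up weights, where the effect is easy to verify by hand:

```python
def score(w, x, b=0.0):
    # Linear classifier: positive score -> "benign", negative -> "malicious".
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial(w, x, eps):
    # Fast-gradient-sign step for a linear model: nudge each feature a
    # small amount against the weight's sign, pushing the score toward
    # the opposite class while barely changing the input.
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w = [0.5, -0.3, 0.8]    # hypothetical trained weights
x = [0.2, 0.1, 0.1]     # sample the model classifies as benign
assert score(w, x) > 0  # original decision: benign

x_adv = adversarial(w, x, eps=0.2)
assert score(w, x_adv) < 0  # a small perturbation flips the decision
```

For deep networks the same attack uses the gradient of the loss instead of the raw weights, but the lesson is identical: a model that exposes its decision surface hands an attacker a map of exactly where to push.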

As an advanced subfield of machine learning, deep learning has the ability to beat malicious AI at its own game by detecting anomalies and thwarting the attack before AI-infused malware gets the chance to execute, protecting the entire infrastructure. These superior capabilities of deep learning will continue to improve over time, keeping us well equipped as we enter the AI arms race.
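The decision logic behind such defenses can be illustrated with a deliberately simple baseline. Real deep learning defenses learn far richer representations than the per-feature statistics below, but the core idea is the same: build a profile of normal behavior, then flag anything that deviates from it before it runs. The feature values here are invented for illustration:

```python
import statistics

def fit_profile(samples):
    # Learn a per-feature (mean, stdev) profile from benign samples.
    columns = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def is_anomalous(profile, x, threshold=3.0):
    # Flag a sample if any feature deviates more than `threshold`
    # standard deviations from the benign baseline.
    return any(abs(v - mu) / sd > threshold
               for v, (mu, sd) in zip(x, profile))

# Hypothetical benign feature vectors (e.g. request rate, payload size).
benign = [[10.0, 1.1], [11.0, 0.9], [9.0, 1.0], [10.5, 1.05], [9.5, 0.95]]
profile = fit_profile(benign)

assert not is_anomalous(profile, [10.2, 1.0])  # close to the baseline
assert is_anomalous(profile, [10.0, 9.0])      # second feature far off
```

The advantage of learning the profile, rather than hand-writing signatures, is that the defense can flag malware it has never seen, which is precisely what is needed against an attacker who adapts.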

Conclusion

As AI continues to evolve, businesses and individuals alike will continue to harness this technology. However, being fully equipped with an adequate defense against AI-enabled cyberthreats will be the key to ensuring ongoing security. Regardless of how attackers manipulate traditional machine learning to develop malicious tactics, deep learning will prevail thanks to its combative capabilities, serving as the ultimate tool for protection against future cyberattacks.