Why we can’t put all our trust in AI

According to theoretical physicist Michio Kaku, “The human brain has 100 billion neurons, each neuron connected to 10,000 other neurons. Sitting on your shoulders is the most complicated object in the known universe.” Yet we are rushing to hand our problems over to Artificial Intelligence (AI). Across many fields, including cybersecurity, this has the potential to offer considerable benefits. However, caution is needed.

While AI has mastered chess and Go, it appears to be floundering when it comes to wider applications. Yet investors keep pouring money into artificial intelligence, despite clear setbacks in self-driving cars, social media, and healthcare, and this trend shows no sign of slowing down any time soon.

AI is not going to solve your cybersecurity problems, so can we stop pinning our hopes on it? Instead of seeking a “magic box” to solve all our problems, organizations should be looking at how skilled personnel can work with AI to utilize the strengths of each to improve the other.

In its current form, AI is really “statistics at scale”. In other words, large data sets are analyzed, patterns are identified, and those patterns are used to build models, which in cybersecurity often means identifying outliers against a model built around a network, or working from previously collected data.
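As a minimal sketch of what that looks like in practice, the Python snippet below fits an outlier model to synthetic “normal” network sessions and flags sessions that deviate from them. The features, numbers, and library choice (scikit-learn’s IsolationForest) are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of "statistics at scale" in a security setting:
# fit an outlier model on features summarising normal network activity,
# then flag sessions that deviate from that baseline.
# The feature set and data here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend baseline: bytes sent, bytes received, session duration (seconds)
baseline = rng.normal(loc=[5_000, 20_000, 30],
                      scale=[1_000, 5_000, 10],
                      size=(10_000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# New observations: one ordinary-looking session, one large long-lived transfer
new_sessions = np.array([
    [5_200, 21_000, 28],       # close to the baseline
    [900_000, 1_500, 7_200],   # huge upload, very long session -> likely flagged
])
print(model.predict(new_sessions))  # 1 = inlier, -1 = flagged as an outlier
```

The point is not the specific algorithm: any such model is only as good as the baseline data it is fitted to, which leads directly to the problems below.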

The problems with applying this approach to cybersecurity are:

  • There is not a lot of historical data on which to build these models. Incident reporting is still patchy and often omits the deep technical detail, and most security assessments never publicly disclose their inner workings, so there is not much training data to draw on.
  • The number of possible events is vast. Think how many possible company setups there are, applications you can run, people you can work with, economic factors that can impact a company, and so on.
  • There continues to be a huge number of unknowns (bad actors are always doing research into new attacks, for example).
  • In many cases these tools create more noise, which means more work for system administrators. Given how the “AI” actually works in most cases, an organization cannot simply leave it to its own devices: in the case of monitoring, alerts still have to be investigated. In theory there should be fewer of them, but getting there requires tuning, which takes time and effort that people often can’t or won’t give (see the sketch after this list).
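To make the tuning point concrete, here is a rough, hypothetical illustration of how the alert volume an analyst faces depends on where a detection threshold is set. The scores are random stand-ins for model output; the numbers do not come from any real deployment.

```python
# Hypothetical anomaly scores from a detection model: the same scores
# produce very different alert volumes depending on the threshold chosen,
# and choosing (and re-choosing) that threshold is ongoing human work.
import numpy as np

rng = np.random.default_rng(1)
anomaly_scores = rng.normal(loc=0.0, scale=1.0, size=50_000)  # stand-in for model output

for threshold in (2.0, 3.0, 4.0):
    alerts = int((anomaly_scores > threshold).sum())
    print(f"threshold={threshold}: {alerts} alerts to investigate")
```

Set the threshold too low and the team drowns in alerts; set it too high and real incidents slip through unnoticed. Finding and maintaining that balance is exactly the effort organizations tend to underestimate.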

An additional problem with AI, in cybersecurity and in its wider applications, is that any biases in the design will come through. Put simply: hacks keep happening, so if we base an algorithm on what we currently believe equates to “good security”, will it actually provide security? Or will it just reflect what we think is good, even though, judging by all those ongoing hacks, that thinking clearly isn’t working in the larger context?

The root of the problem is that cybersecurity is hard, and for a hard problem what better solution than a magic box that produces the answers? Unfortunately (or fortunately), people still need to be involved. Relying solely on the black box produces a false sense of security, which can have disastrous effects.

The way forward is humans and AI working together, each playing to their strengths. AI can do a lot of the heavy lifting, handling repetitive tasks and spotting flaws in vast amounts of data, while humans can quickly narrow down the issues that matter and act on them.

We tend to downplay the capabilities of people, but the more research investigates this, the more we discover how complex our brains are and how much they can do. Self-driving cars are the classic example. Think of what goes on when driving a car: the motor skills required to steer and work the pedals, plus the massive amount of information your senses consume and analyze in real time, from the dashboard, your passengers, and the cars around you to the weather, the road ahead, and whatever is happening behind you, and finally the instinct that tells you when something just “doesn’t feel right”. That instinct has been refined over eons of evolution and is not easily recreated in code.

But back to cybersecurity: we need to use AI where it can help, and ideally use it to make better use of people’s talents. Penetration testing is unlikely to be replaced by an entirely automated solution anytime soon (regardless of what any ad may tell you): the variables are many, time is short, and many exploitable vulnerabilities take considerable effort to find and abuse. But we can use AI to help human testers get to the critical issues quickly and focus on what is really going to impact an organization. That saves time and money, and leads to better outcomes.
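As a toy example of what that human-plus-AI triage might look like, the sketch below ranks scanner findings so a tester can start with the likely-critical ones. The fields and weights are invented for illustration; a real scoring model would be far richer.

```python
# A toy triage sketch: rank findings so a human tester looks at the
# likely-critical items first. The fields and weighting are assumptions
# made for illustration, not any particular tool's output or scoring model.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    cvss: float              # base severity, 0-10
    exploit_available: bool  # public exploit code known?
    exposed_to_internet: bool

def priority(f: Finding) -> float:
    score = f.cvss
    if f.exploit_available:
        score += 3.0
    if f.exposed_to_internet:
        score += 2.0
    return score

findings = [
    Finding("Outdated TLS configuration", 5.3, False, True),
    Finding("SQL injection in login form", 9.1, True, True),
    Finding("Verbose error messages", 4.0, False, False),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.title}")
```

The model does the sorting; the human decides what is actually exploitable and what it means for the business.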

Humans are, of course, the source of the bias mentioned earlier, so how are they going to help with that? One way to mitigate it is to engage people who are not “experts” in cybersecurity and then support them with machine learning or AI.

Intuition, “gut instinct”, and our ability to narrow down a vast range of choices quickly (while not always being correct) still have a role to play. The goal of any effective AI cybersecurity system should be to enhance those abilities, not to replace them.

With the ongoing skills shortage in cybersecurity, it is clear that automated solutions are going to play a role, and possibly a big one. But we need to stop looking for a silver bullet, a magic box that does all the work, and instead combine the best humans have to offer with some amazing technology. That gives us the best chance of winning.
