AI is already supporting businesses with tasks ranging from shaping marketing strategies to powering driverless cars to delivering personalized film and music recommendations. Its use is expected to grow even further in the coming years. In fact, IDC forecasts that spending on cognitive and AI systems will reach $77.6 billion in 2022, more than three times the $24.0 billion forecast for 2018.
But the question remains – can businesses expect AI adoption to effectively protect them from cyber threats?
Entry points and malware
The Internet of Things (IoT) means everyday objects are generating more traffic, collecting more data and opening up more entry points for attack than ever before. Combined with increasingly integrated networks, this gives cybercriminals a plethora of ways to bring down an organization. More IoT devices also mean a greater likelihood of unpatched devices on the network, and a single unpatched device can leave the whole system vulnerable.
The truth is that as businesses grow smarter with AI, so do their attackers. Already, malware can infiltrate a system, collect and transmit data, and remain undetected for days. But with AI, an attack is augmented with the ability to adapt and learn how to improve its effectiveness with every moment it goes undetected.
The problems with AI and cybersecurity
It’s worth noting that AI refers to the broad concept of machines mimicking human cognitive functions: detecting patterns, spotting anomalies, classifying data and grouping information. Machine learning, on the other hand, is a subset of AI – when machines are given enough data, they can use it to solve problems and make decisions by themselves.
In an ideal world, AI and machine learning would spot and shut down an attack before humans needed to do anything. After all, they can detect anomalous behavior and deter security intrusions around the clock.
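To make that idea concrete, here is a minimal, purely illustrative sketch of the principle behind such anomaly detection: a baseline is learned from known-good traffic, and new observations that deviate too far from it are flagged. The function name, data and threshold are all hypothetical; real systems learn far richer models of normal behavior.

```python
import statistics

def flag_anomalies(baseline, new_samples, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the baseline mean.

    A toy stand-in for AI-based monitoring: real systems learn richer
    baselines, but the principle is the same.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [(i, v) for i, v in enumerate(new_samples)
            if abs(v - mean) > threshold * stdev]

# Illustrative requests-per-minute counts; the spike stands in for an attack.
baseline = [120, 115, 130, 125, 118, 122, 119, 121]
print(flag_anomalies(baseline, [123, 117, 5000, 124]))  # [(2, 5000)]
```

The point of the sketch is the round-the-clock part: once the baseline is learned, the check runs on every new observation without a human in the loop.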
However, this isn’t always possible. Algorithms are only as good as the humans who designed them, and any automated decision-making needs a significant amount of pre-planning and manual analysis of data. Furthermore, machine learning requires human feedback to determine what is ‘good’ or ‘bad’.
This creates problems of its own: malicious attacks can be designed to appear unthreatening from the outset and slip past an AI’s algorithms. Deliberately confusing and subverting a defensive model in this way is known in the research space as “adversarial machine learning” and is a particularly hard challenge to overcome. Getting machine learning right is difficult by itself: real-life data is messy and noisy, and attacks are relatively infrequent. All of these factors limit the amount of useful data available for training and sharpening AI.
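A toy illustration of the evasion side of adversarial machine learning – not any real detector – is a linear scorer over hypothetical feature counts. An attacker who learns the weights can pad a malicious sample with benign-looking features until its score drops below the alert threshold, without changing the malicious behavior at all.

```python
# Hypothetical linear "malware" scorer: weighted feature counts vs. a threshold.
# All feature names, weights and the threshold are invented for illustration.
WEIGHTS = {"encrypted_payload": 2.0, "registry_write": 1.5, "benign_api_call": -0.5}
THRESHOLD = 3.0

def is_flagged(features):
    score = sum(WEIGHTS.get(name, 0.0) * count for name, count in features.items())
    return score >= THRESHOLD

malware = {"encrypted_payload": 1, "registry_write": 1}
print(is_flagged(malware))   # True  (score 3.5)

# Same malicious behavior, padded with benign-looking API calls:
evaded = dict(malware, benign_api_call=2)
print(is_flagged(evaded))    # False (score 2.5 slips under the threshold)
```

Real detectors are far more complex, but the same principle – nudging inputs just enough to cross a decision boundary – underlies adversarial attacks on modern models.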
Given its flaws, AI should not be considered an adequate replacement for human surveillance – at least not in the immediate future. Every technology has limits, and human knowledge and intuition will remain vital to understanding how to react to a threat and how deep the issue at hand runs. Furthermore, not all attacks are sophisticated AI-based hacks. A variety of human threats must be counteracted too, and it still takes a human to recognize certain behavioral patterns.
However, it’s not all bad. AI can share some of the burden of surveillance and take several mundane chores off human hands, freeing people up for decision-making and specific pattern recognition. A hybrid approach, where only specific processes are automated and the rest remains the responsibility of humans, is the most logical option.
CIOs need to ask the right questions to ensure they don’t get swept up in the AI hype. Any security solution claiming absolute protection should be treated with caution. The potential is there for security to become more proactive than reactive, but a common-sense, dual approach is needed. For the moment, human expertise combined with AI technology can achieve better results than either one alone.