When AI and security automation become foolish and dangerous

There is a looming fear across all industries that jobs are at risk from artificial intelligence (AI), which can perform those same jobs better and faster than humans. A recent Forrester report predicts automation will replace 17 percent of U.S. jobs by 2027, only partly offset by the 10 percent growth in new jobs predicted to result from the automation economy.

For vendors, the idea of automation is highly alluring. Automation is technology, and technology promises significant and material financial advantages over its unpredictable and fallible human counterparts. Perhaps most tellingly, automation is a key component of most vendors’ ROI stories, making it a powerful tool in the “buy our product and we will save you money” toolbox.

But should organizations really be sprinting headlong into automation? There is no question that automation delivers significant value. Repetitive and boring tasks waste valuable time and result in unhappy, disengaged employees. Similarly, certain other tasks, specifically those that involve analyzing large data sets, are simply better performed by computers. If automating these types of analytical tasks provides business value, organizations should certainly examine those options.

Implementing some automated solutions can prove valuable. However, when it comes to network security, fully automating the tasks of a security analyst can be a dangerous and foolish decision for a variety of reasons.

1. Cybersecurity threats are not software – they’re creative humans with latitude

Attackers are both intrinsically and extrinsically motivated to bypass controls, whether those controls are automated or not. For attackers, breaking into your network is not only lucrative, it’s fun. That point is important enough that it bears repeating: Hacking is fun.

For defenders, combating this creativity with automation is likely impossible, because an automated system is only as effective as the components it is built from: failures in those components become failures of the automated system itself. Motivated attackers have bypassed, and will continue to bypass, these automated controls, patiently probing for flaws in automated defenses and the processes built on top of them. Why? Because when people are engaged in something fun, they are motivated by the pursuit itself and will work until they succeed.

2. Automating too many things makes hiring harder – not easier

A process that is not well understood cannot be automated. So the tasks that tend to get automated are the simple ones, leaving the more complex ones behind. This is known in the study of automation as the “Left-over Principle.”

This poses a new problem for organizations implementing automation. Hiring people with the appropriate skill sets to accomplish even the simple tasks of a security analyst is already a challenge. According to CyberSeek, a cybersecurity employment data tool, 40,000 information security analyst jobs go unfilled in the U.S. every year. In most enterprises, the average tenure of a security analyst is less than two years – a problem that is exacerbated by the current cybersecurity skills gap.

Think you’re having trouble finding people to do the simple tasks well? Wait until only the difficult and complex tasks are left. Because no cybersecurity prevention, detection, or control system is thoroughly effective, automation doesn’t eliminate the need for humans – it creates a new requirement for a human to audit and manage the automated system itself. Additionally, you still need analysts who specialize in the more complex, experience-dependent investigative tasks left over by automation, and automation can create problems here too, as highlighted next.

3. Automation can widen the cybersecurity skills gap

As mentioned above, the complicated leftover tasks require highly specialized skills from a small pool of qualified applicants who are difficult to attract. These tasks demand exceptionally flexible judgment and the creative application of different analytic methods.

Regularly performing simple tasks keeps one’s skills and perspectives broadly relevant. It also makes a person better equipped for changes in the rarer, more complex tasks built on those simpler building blocks. For example, studies of novice versus experienced professionals across industries frequently show that it is not a difference in knowledge, but the difference in repeated, broad experience between the two groups that forms the basis of experienced professionals’ better decisions (i.e., procedural knowledge). In other words, performing simpler tasks makes you better at harder tasks.

Why is automation getting so much attention throughout the security industry?

For starters, there is currently significant investment in automation, often independent of how thoughtfully its benefits and challenges have been weighed.

However, the concept is gaining real traction in enterprises for a reason: security teams experience genuine pain and material risk from the deluge of information currently blasted at them.

Automation does indeed help solve certain aspects of this problem. Automating repetitive and boring tasks, highly error-prone tasks, data entry, and the identification of patterns in large amounts of data are all examples that make teams more effective, as the sketch below illustrates. However, like previous fads in the security industry, automation has its share of benefits and challenges, and those challenges can have a net-negative effect (and create a new cost center) if not well understood.
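To make the distinction concrete, here is a minimal sketch of the kind of repetitive, pattern-matching task that is worth automating – tallying failed SSH logins per source address and surfacing only the outliers for a human analyst. The log path, log format, and threshold are illustrative assumptions, not a prescription:

```python
import re
from collections import Counter

# Assumed syslog-style auth log line (format is an assumption), e.g.:
# "Jan 12 03:14:07 host sshd[1042]: Failed password for root from 203.0.113.7 port 52211"
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")

def flag_repeated_failures(log_lines, threshold=50):
    """Count failed logins per source IP and return only sources above threshold."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    # Surface only the sources worth a human analyst's attention
    return {ip: count for ip, count in failures.items() if count >= threshold}

if __name__ == "__main__":
    # "/var/log/auth.log" is a typical but assumed location
    with open("/var/log/auth.log") as log:
        for ip, count in flag_repeated_failures(log).items():
            print(f"Review: {count} failed logins from {ip}")
```

The point is not the fifteen lines of Python – it’s that the output still lands in front of an analyst, who decides whether a noisy address is a misconfigured script or an actual attack.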

As long as there is a creative human attacker, automation should not be about replacing people. It’s about freeing people to do, more efficiently and effectively, the things they do better than machines. This is why you have security analysts in the first place: to discern the legitimacy of behavior, infer intent, and respond appropriately.
