Although the term is close to a century old, FUD (Fear, Uncertainty and Doubt) has been closely associated with the technology industry since the 1970s.
FUD is a simple but effective strategy that relies on supplying the audience with negative information to influence their decisions, and it’s easy to see why it’s become so prevalent in the world of cybersecurity, with the ever-present threat of another major attack.
Security vendors themselves obviously have a vested interest in having potential buyers worried about the risk of an imminent cyberattack, as this fear will sway their decision to invest in more security solutions.
Likewise, media coverage of cyber incidents usually facilitates these doom-laden warnings, with a particular focus on the cost of attacks and the likelihood of further incidents. Again, this is unsurprising, as negative headlines have always been known to shift more copies, or more recently earn more clicks. Thanks to the increasing number of incidents impacting well-known brands or public infrastructure, we have seen this approach increasingly played out in the mainstream media.
Even non-commercial efforts by governmental bodies and not-for-profit organisations around security tend to lean towards FUD as a way of getting individuals and enterprises to take the issue seriously. Much of the discussion on the upcoming GDPR, for example, has focused on the risk of huge new punitive fines, rather than more positive messages.
What harm does FUD do?
In small doses, FUD can indeed be quite useful in gaining attention and spurring action. However, I encounter a lot of hyperbole around cyber, with terms like “hurricane force” and “weapons of mass (cyber) destruction” being thrown around, along with a focus on unlikely doomsday scenarios.
This does not spur people to action, but rather pushes them in one of two counterproductive directions. All the doomsaying may shake some cash loose from the organisation, but it is unlikely to go to the right places, and will instead be wasted on whatever the new technology of the moment is. Alternatively, the hyperbole can simply inure people to the real risk, resulting in no action at all. Those who cry FUD are like the boy who cried wolf: it winds up hurting the CISO as a voice of IT risk by making them seem unbalanced and fearful of disaster to the point that they can’t have an adult conversation.
It is of course very true that serious cyberattacks will continue to happen, and there are many threat actors out there who can employ advanced tools to devastating effect. It’s also true that many organisations are not paying enough attention to key security issues such as single points of failure and resiliency. However, the right way to mobilise decision makers is not through exaggeration and prophecies of doom.
What should the industry be focusing on?
We need to stop fetishizing FUD and instead open a meaningful dialogue around the most likely risks and how we can practically address them. Whether we’re talking about a major attack on national infrastructure or an attack on a specific enterprise, the focus needs to be on ensuring the confidentiality, integrity and availability of our systems and data.
Central to this is addressing single points of failure (SPOFs) – the elements of a system whose failure will cause the entire system to stop working. The priority for all organisations should be to identify any SPOFs within their operations and eliminate them by building in redundancy and other measures. On a national scale this means ensuring that critical services cannot easily be knocked out by a single attack.
We saw a classic case of this when the WannaCry attack disabled a large number of NHS hospitals because there were no backup plans to work around systems being locked by the ransomware. Private enterprises likewise need to ensure a high level of resilience that will enable critical business processes to continue in the event of an attack. Resilience here means being able to bounce back quickly, with as little interruption to availability as possible.
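The idea of eliminating a SPOF through redundancy can be sketched in a few lines of code. This is purely an illustrative example – the backends and failure modes here are hypothetical, not drawn from any specific system or incident in the article:

```python
# Illustrative sketch: a service with a single backend is a SPOF.
# Adding a redundant replica and failing over removes it - the request
# only fails if every backend fails. All names here are hypothetical.

def fetch_with_failover(backends, request):
    """Try each redundant backend in turn; fail only if all fail."""
    errors = []
    for backend in backends:
        try:
            return backend(request)
        except Exception as exc:  # real code would catch specific errors
            errors.append((backend.__name__, exc))
    raise RuntimeError(f"All {len(backends)} backends failed: {errors}")

# Example: the primary is down, but the replica still answers.
def primary(request):
    raise ConnectionError("primary datastore unreachable")

def replica(request):
    return f"records for {request}"

print(fetch_with_failover([primary, replica], "patient-123"))
```

The point is not the code itself but the design habit: for each critical dependency, ask what happens when it is unavailable, and make sure the answer is not "everything stops".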
Linked to this is the principle of least privilege, which holds that every element of a system – from applications to users themselves – should only be able to access information and resources necessary for their role. When a compromise occurs, least privilege means the attacker will find it much more difficult to escalate their attack and spread to other systems.
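Least privilege can be made concrete with a minimal access-control sketch: grant each role only the permissions its job requires, and deny everything else by default. The roles and permission names below are invented for illustration only:

```python
# Minimal least-privilege sketch: each role is granted only the
# permissions it needs; anything not explicitly granted is denied.
# Roles and permission strings here are hypothetical examples.

ROLE_PERMISSIONS = {
    "receptionist": {"appointments:read", "appointments:write"},
    "clinician": {"appointments:read", "records:read", "records:write"},
}

def is_allowed(role, permission):
    """Deny by default; allow only explicitly granted permissions."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("clinician", "records:read")
assert not is_allowed("receptionist", "records:write")  # not needed for the role
assert not is_allowed("unknown-role", "records:read")   # default deny
```

Under this model, a compromised receptionist account cannot read or alter clinical records, which is exactly the containment effect the principle is meant to deliver.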
The growing number of threat actors and the proliferation of new tools mean it is inevitable that system infrastructure will be breached at some point – but the breach of information itself can and should be avoided with more attention and focus. It has got so bad that tools are chosen simply on the basis of their ability to find things, without regard to the negative impact on the business. People looking for the best mousetrap for their house are deploying tools so coarse that they often maim and kill the children in our IT environments – the users, systems and business processes.
While the continued reliance on FUD may seem to be a useful sales tactic for the solution of the day in the short term, in the long term it is damaging the credibility of the security industry and causing decision makers to throw cash in the wrong direction, or simply ignore the threat entirely. Instead of scaremongering, we need to help steer organisations towards essential security processes that will ensure confidentiality, integrity and availability even if a doomsday scenario does occur.