The human brain is a fantastic machine, but we’re all subject to cognitive bias and reasoning errors – and cybersecurity pros are no exception.
In a newly released report, Dr Margaret Cunningham, psychologist and Principal Research Scientist at Forcepoint, examined six universal unconscious human biases and how they can influence cybersecurity decision making, and urged infosec pros and leaders to make an effort to overcome them.
Inconvenient cognitive biases
Our days are filled with decision making: when to wake up, what to eat, what to wear, which tasks to complete and how, whether to meet with friends, and so on. Most of these decisions are made on "autopilot" – quickly, without much thought, and based on our own preferences, intuition, or past experiences.
But some decisions require more effort to gain a clear look at the entire situation, to overcome unconscious biases, and to perform more analytical and logical reasoning.
“In cybersecurity, understanding and overcoming security-related perceptual and decision-making biases is critical, as biases impact resource allocation and threat analysis,” Dr Cunningham explained.
“Building awareness of cognitive biases can help us move beyond biased decision making, and more importantly, help us avoid designing systems that perpetuate our own biases in technology.”
The biases that can influence our decision making are:
- Aggregate bias (inferring something about an individual using data that describes trends for the broader population)
- Anchoring bias (locking onto a specific feature or set of features of information early in the decision-making process)
- Availability bias (frequently hearing about specific events may impact how humans perceive how likely an event is to occur)
- Confirmation bias (people may decide what happened before investigating, and only look for data that supports that theory)
- The framing effect (how choices are worded/framed may influence decisions)
- Fundamental attribution error (people see other people's failures or mistakes as part of their identity rather than attributing them to contextual or environmental influences)
These biases could lead analysts to focus on the wrong individual as the source of a breach, make wrong estimates about the potential impact of a threat, focus on unlikely threats and misallocate resources, come to wrong conclusions, spend too much time on incorrect theories, and so on.
No one is immune to these biases, but they can be overcome if we are ready to see and admit they play a part in our thinking process.
Some of these biases can be addressed through the use of advanced analytics, while others can't be addressed by technology at all.
“One bias that requires human effort is overcoming the impact of the fundamental attribution error. While organizations can raise awareness of this phenomenon, individuals within an organization must take on the responsibility for challenging their own assumptions about themselves and about others,” Dr Cunningham noted.
She advises security professionals and business leaders to take a moment and think about whether they are guilty of letting these biases influence their decision making.
They should ask themselves and their colleagues whether they make assumptions about individuals based on group characteristics, whether their company's perception of current risks is regularly swayed by the news cycle, and whether they get hung up on a forensic detail that prevents them from identifying a new strategy for exploration.
“When you run into the same problem, over and over again, do you slow down to think about other possible solutions or answers? When offered new services and products, do you assess the risk (and your risk tolerance) in a balanced way? From multiple perspectives? And finally, does your team take steps to recognize your own responsibility for errors or for engaging in risky behaviors, and give credit to others who may have made an error due to environmental factors?”