Be a “dumbass”, like some of the world’s best cyber investigators
One of my closest friends in the cybersecurity industry has had a second-to-none career path. While in the employ of an industry leader in incident response, he was consistently their busiest forensic investigator, spearheading some of their most notorious cases.
He was then part of helping a leading EDR company launch its own forensics services. And if that wasn’t enough, he went on to create (from scratch) a sophisticated insider threat detection program for a large technology vendor in Silicon Valley. He is now sought after by law firms and VCs alike to consult and present his unique insights.
While chatting over drinks one day, I asked him: “Why are you one of the most successful and influential cyber investigators I know? What is it about you that separates you from other investigators?”
His answer was immediate: “Gary, I’m the biggest dumbass I know.”
He laughed heartily, while I chuckled uncomfortably and probably looked confused (if not dismayed).
In his explanation, he highlighted how frequently he receives threat reports and investigations from more junior investigators that explain in technical detail how a given function or process was exploited by an attacker to achieve their goals. He typically then asks questions like:
- How do you know it happened that way?
- Are you sure this function can be harnessed for that purpose?
- Where did you get the information to validate this assertion?
He went on to tell me he will often discover (with enough pressure and digging) that the investigator who wrote the report may have made a number of “educated guesses” to fill in technical gaps.
He credits much of his success to one simple point: do not assume you know how something happened, even if you are pretty sure. Be a “dumbass.” Validate everything. Yes, this can be difficult, but the resulting quality of the work will frequently be head-and-shoulders above your peers.
He is indeed a sage, as the concept he shared has long been familiar in Eastern traditions—the “beginner’s mind.” A quote by Shunryu Suzuki captures beginner’s mind well: “In the beginner’s mind there are many possibilities, but in the expert’s, there are few.”
Before you can explore the possibilities that could exist with the threat you are examining, you need to be open to those possibilities in the first place.
In my previous Help Net Security articles (1, 2), I shared advice for aspiring threat hunters, investigators and researchers on how to start a career and stand out to potential employers by researching threats and writing about them. Being a “dumbass” – as crazy as it may sound – is my second big piece of advice, and it will make your research and writing even more valuable.
My third piece of advice might sound like something that comes from a fortune cookie, but the insights driving it are basically unknown throughout the security industry. In fact, this could be one of the first articles to publicly discuss this aspect of being an infosec “defender.”
My advice is: Don’t work with data. Play with data.
Getting inside the minds of the best researchers and threat hunters
During the first two years building Awake Security, we spent thousands of person-hours embedded in SOCs, sitting shoulder-to-shoulder with Tier 1, 2, and 3 analysts. Our investigation began with manually instrumenting every click an analyst made so we could analyze patterns and gain valuable insight into the needs of our users before beginning development on our platform. The depth of this investigation was a truly unique experience that I have never seen or heard of being executed at this level or with this rigor.
Our investigation evolved into a cognitive and psychological profiling of what motivated every click an analyst made. This investigation was conducted by specialists in the field of cognition and education. Technically speaking, this field of study is known as Cognitive Work Analysis (CWA), which is used to “model complex sociotechnical work systems.” As described on Wikipedia:
Cognitive Work Analysis can be used to describe the constraints imposed by a system, its functional properties, the nature of the activities that are conducted, the roles of the different actors, and their cognitive skills and strategies. The different tools within the CWA framework have been used for a plethora of different purposes, including system modelling, system design, process design, training needs analysis, training design & evaluation, interface design and evaluation, information requirements specification, tender evaluation, team design, and error management training design. Despite its origin within the nuclear power domain, the CWA applications referred to above have taken place in a wide range of different domains, including naval, military, aviation, driving, and health care domains.
The work of Awake’s CWA researchers was backed by a team of engineers from Stanford, Harvard, Carnegie Mellon, Oxford, UC Berkeley, and other distinguished universities. The team also notably included hackers and investigators with decades of experience. This research is unprecedented, as we have found no similar work in the public domain. It provided us a rare and unique understanding for why analysts take the actions they take. What we found was beyond what any of us expected and challenged many of our preconceived notions about how to build the ideal threat detection and hunting platform.
For our research, we did not simply analyze the cognitive characteristics that separate people in information security from other branches of technology, but also the characteristics that separate security analysts from other subdisciplines in security. It’s not surprising to learn that the highest performing analysts have much higher-than-average scores in inductive reasoning, as this is the ability to combine pieces of seemingly unrelated events to identify patterns.
As a side note, and perhaps just as interestingly, we also found the people most successfully engaged in offensive security disciplines (hacking) tend to require much higher degrees of deductive reasoning traits.
But our research took a sharp and unexpected turn when we discovered the extreme reliance analysts – security analysts specifically – have on a mental characteristic called “flexibility of closure” to make accurate decisions in the first place. Flexibility of closure is broadly defined as the ability to identify important patterns (or behaviors) in data when those patterns are surrounded by distracting data.
One way to understand this is to read “closure” as “ceasing to continue looking for importance” when examining a set of data. Closing the search is also the key decision an analyst makes before deciding to initiate the remediation process for a possibly compromised device.
To have a higher degree of “flexibility” before “closure” is to say that you will continue mentally extracting possibly important information from a set of data – long after most other people would stop seeking patterns in the same data.
The implications of all this for security product UI design are profound (and show why almost all major products are poorly designed to support analysts’ cognitive requirements), but the understanding is just as profound for how you should approach your own work. Someone exercising a high degree of flexibility of closure can look childishly, naively, or even stubbornly engaged in the task. But it is not stubbornness or naivety – it is play. In some regards, it could also be described as “flow.”
I describe watching someone with a high degree of flexibility of closure as watching “creativity,” where creativity can be defined as “an absence of self-criticism or other restraint that prevents useful exploration of ideas and possibilities.” Remember that “closure” here means ceasing to look for importance in a set of data – and that ceasing is typically precipitated by negative judgments of your own process or confidence.
In other words: play. Explore. Connect dots, perhaps even naively, even when they do not yet fully make sense. And while you are playing with data, keep being a dumbass – take the time to ask and explore questions even when you think you already know the answer. Doing so may surface patterns that everyone else missed by “quitting” too soon.
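To make the idea of “quitting too soon” concrete, here is a toy sketch (entirely hypothetical data and function names, not from any real investigation) contrasting an analyst who closes the search at the first oddity with one who keeps extracting rare patterns from the same log – and, by continuing, connects two processes to the same suspicious destination:

```python
from collections import Counter

# Hypothetical log: (process name, outbound destination) pairs.
events = [
    ("chrome.exe", "cdn.example.com"),
    ("svchost.exe", "update.example.com"),
    ("chrome.exe", "cdn.example.com"),
    ("svchost.exe", "update.example.com"),
    ("notepad.exe", "203.0.113.7"),   # odd: notepad talking to a raw IP
    ("svchost.exe", "update.example.com"),
    ("chrome.exe", "cdn.example.com"),
    ("chrome.exe", "203.0.113.7"),    # same rare destination, different process
]

def stop_at_first_oddity(events):
    """Early 'closure': report the first rare destination and stop looking."""
    counts = Counter(dst for _, dst in events)
    for proc, dst in events:
        if counts[dst] <= 2:
            return [(proc, dst)]
    return []

def keep_looking(events):
    """Flexibility of closure: collect every rare destination, which reveals
    that two different processes share it -- a dot an early stop never connects."""
    counts = Counter(dst for _, dst in events)
    return [(proc, dst) for proc, dst in events if counts[dst] <= 2]

print(stop_at_first_oddity(events))  # one finding, investigation 'closed'
print(keep_looking(events))          # both findings share a destination
```

The mechanics are trivial on purpose: the difference between the two functions is not cleverness, it is simply the refusal to stop after the first answer.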
Finding threat patterns that the rest of the industry missed will be extremely fruitful for your career prospects.