Are we chasing the wrong zero days?

Zero days became part of mainstream security after the world learned that the Stuxnet malware had been used to inflict physical damage on an Iranian nuclear facility. After the revelation, organizations focused their efforts on closing unknown pathways into networks and on detecting unidentified cyber weapons and malware. A number of cybersecurity startups have even ridden the “zero day” wave into unicornville.

Stuxnet’s ability to halt operations forced critical infrastructure operators to think about how they could fall victim to cyber weapons. Subsequent attacks believed to have taken out power grids have certainly raised alarm. When it comes to critical infrastructure, though, unknown digital payloads and unidentified gaps in code may not be the easiest way for attackers to penetrate systems or inflict damage. There may be an even more dangerous type of “zero day” in play: humans.

Within the critical infrastructure sectors, the human risk factor seems to be going unnoticed. This is evidenced by a series of events over the past couple of years, incidents that have led to stark warnings, unknown outcomes, and dire consequences. Since November is National Critical Infrastructure Security and Resilience Month, now is a good time to consider some examples and mitigation steps.

Stark warnings

In a recent report, the Office of the Inspector General concluded that several dams run by the US Bureau of Reclamation are at increased risk from insider threats. The Bureau’s failures to limit system administrator access to industrial control systems (ICS), to comply with password policies, and to conduct sufficient background checks were the key risk factors. It’s worth noting that the Inspector General found the dams’ ICSs were at low risk from external cyber threats. While there have been no known reports of consequences since the report was published, it continues to serve as a warning of how serious the threat posed by humans is within the sector.

Unknown outcomes

Even in cases where malware and other technical means were used in attacks on critical infrastructure, human unpredictability is a key factor. In March, US-CERT announced that Russian operatives were engaged in massive, coordinated attacks on critical infrastructure sectors. The DHS and FBI found that the campaign targeted networks with spear phishing and watering hole attacks, among other means.

To date, there have been no public reports of US power grids going dark or other consequences from this attack. We do not know the extent of the information the attackers may have stolen or to what degree they are poised to strike. We do know that by taking advantage of humans, the attackers gained access to systems and information that underpin the US way of life.

Dire consequences

If ever there were an example of how a failure to prepare for risky human actions can lead to a critical infrastructure disaster, it is the 2016 US Presidential election. Following wild speculation about the hacks of the Hillary Clinton campaign, the DNC and the DCCC, the DOJ investigation showed that Russian operatives fooled either John Podesta or one of his assistants with a phishing email, compromising his credentials.

With credentials in hand, the attackers dug deep into the Democratic Party apparatus. They stole party emails and other information, including emails written by Clinton. Some say the public release of these emails tipped the election in President Trump’s favor. If true, then risky human actions, whether Podesta’s or his assistant’s, may have changed the course of world history.

Fixing the problem

Effective cybersecurity is achieved through a layered approach, and analysis of the three examples above (and many others) suggests that the human security layer is lacking within critical infrastructure. While there is no way to completely remove insider-driven risk, there are ways to reduce it. To start, anyone responsible for reducing insider risk should have these basic controls in place:

1. Privileged access control and monitoring: In the dams report, failure to limit administrator access to ICS was a key risk factor. Effective privileged access management and monitoring technologies help organizations identify privileged accounts and their owners, establish processes and controls to restrict credential sharing, monitor account use, and govern identity across accounts (the first sketch after this list shows a simple credential-sharing check).

2. Training and awareness: To reduce insider threat risk, employee buy-in is critical, and most trusted users aren’t thinking “security first.” That doesn’t have to cause further headaches. Organizations should implement awareness and training programs that report when and where actual mistakes and willful behaviors are taking place (the second sketch below shows one way to measure this). Studies show that security education can reduce attack susceptibility rates by as much as 70 percent.

3. Early warnings: A number of solutions can notify you when suspicious activity is in play, but many “early warnings” turn out to be wild goose chases. Look for alert features powered by technologies that understand context, distinguish normal events from anomalies, and infer user intent (the third sketch below illustrates the idea). When these factors are considered, warning signals carry a higher degree of accuracy and generate fewer false positives.

4. Behavior monitoring: Organizations that know how insiders behave within their digital environments gain insight into how and why systems are accessed, how data is shared, and when risky behaviors and activities take place. Look for tools that can be quickly deployed and managed, that scale across large environments, and that monitor behavior both on and off the network (the final sketch below shows a simple risk-scoring approach).
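To make these controls more concrete, the sketches below are minimal Python illustrations, not real product code; every account name, field, and threshold is a made-up assumption. This first one addresses item 1 by flagging privileged accounts that log in from an unusually wide set of source hosts, a common sign of credential sharing.

```python
# Minimal sketch: flag privileged accounts whose logins come from an
# unusually wide set of source hosts, a common sign of credential sharing.
# Account names, event fields, and the threshold are hypothetical.
from collections import defaultdict

PRIVILEGED = {"ics_admin", "scada_root"}  # assumed privileged accounts
MAX_DISTINCT_HOSTS = 2                    # assumed policy: admins use <= 2 hosts

login_events = [                          # stand-in for real authentication logs
    {"account": "ics_admin", "src_host": "ops-ws-01"},
    {"account": "ics_admin", "src_host": "ops-ws-07"},
    {"account": "ics_admin", "src_host": "vendor-laptop"},
    {"account": "jsmith",    "src_host": "ops-ws-03"},
]

hosts_by_account = defaultdict(set)
for event in login_events:
    hosts_by_account[event["account"]].add(event["src_host"])

for account, hosts in hosts_by_account.items():
    if account in PRIVILEGED and len(hosts) > MAX_DISTINCT_HOSTS:
        print(f"ALERT: possible credential sharing on {account!r}: "
              f"logins from {sorted(hosts)}")
```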
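For item 2, a training program needs a feedback loop showing where mistakes actually happen. This sketch computes per-department click rates from hypothetical phishing-simulation results so training can be targeted; the departments and records are invented for illustration.

```python
# Minimal sketch: measure where phishing-simulation mistakes happen so
# training can be targeted. Departments and results are hypothetical.
from collections import Counter

simulation_results = [   # one record per employee in a simulated campaign
    {"dept": "operations",  "clicked": True},
    {"dept": "operations",  "clicked": False},
    {"dept": "operations",  "clicked": True},
    {"dept": "engineering", "clicked": False},
    {"dept": "engineering", "clicked": False},
    {"dept": "finance",     "clicked": True},
]

sent = Counter(r["dept"] for r in simulation_results)
clicked = Counter(r["dept"] for r in simulation_results if r["clicked"])

for dept in sent:
    rate = clicked[dept] / sent[dept]
    print(f"{dept}: {rate:.0%} click rate across {sent[dept]} simulated phish")
```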
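For item 3, one simple way to add context to alerts is to learn each user’s normal pattern and flag only statistical outliers. This sketch baselines login hours per user with a z-score test; the users, hours, and threshold are assumptions, and a real system would also handle subtleties such as hour wraparound and shift changes.

```python
# Minimal sketch: context-aware alerting. Rather than flag every off-hours
# login, learn each user's normal login hours and alert only on outliers.
# Users, hours, and the threshold are hypothetical sample data.
from statistics import mean, stdev

history = {  # hour-of-day of past logins per user
    "operator_a": [7, 8, 8, 9, 7, 8, 9, 8],          # day shift
    "operator_b": [22, 23, 22, 23, 23, 22, 23, 22],  # night shift
}

def is_anomalous(user, hour, threshold=3.0):
    """Flag a login hour more than `threshold` std devs from the user's norm."""
    hours = history[user]
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# A 23:00 login is an anomaly for the day-shift operator but routine for
# the night-shift operator, so only one alert fires.
print(is_anomalous("operator_a", 23))  # True  -> alert
print(is_anomalous("operator_b", 23))  # False -> no alert
```

Baselining per user is what keeps the night-shift operator’s normal logins from flooding analysts with false positives.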
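Finally, for item 4, behavior monitoring ultimately reduces to scoring observed activity against risk indicators. This sketch assigns illustrative weights to a few indicators and ranks users by accumulated risk; the indicator names and weights are assumptions, not any vendor’s schema.

```python
# Minimal sketch: insider behavior scoring. Weight risky indicators on each
# activity event and surface the highest-risk users for analyst review.
# Indicator names, weights, and telemetry are hypothetical.
RISK_WEIGHTS = {
    "off_network":   2,  # activity from outside the corporate network
    "mass_download": 4,  # bulk file retrieval
    "usb_copy":      3,  # copying data to removable media
    "after_hours":   1,
}

activity = [  # stand-in for endpoint/DLP telemetry
    {"user": "jsmith", "indicators": ["after_hours"]},
    {"user": "pdoe",   "indicators": ["off_network", "mass_download", "usb_copy"]},
    {"user": "pdoe",   "indicators": ["mass_download"]},
]

scores = {}
for event in activity:
    score = sum(RISK_WEIGHTS[i] for i in event["indicators"])
    scores[event["user"]] = scores.get(event["user"], 0) + score

# Rank users so analysts review the riskiest behavior first.
for user, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{user}: risk score {score}")
```

In practice, scores like these would feed the early-warning layer from item 3 rather than stand alone.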
