How to remove human error from the cyber risk equation

In attempting to fortify the enterprise’s cyber assets, we have turned much of our attention to human error. After all, the vast majority of hackers rely on exploiting employees to break through corporate defenses, anticipating that these employees will fail to “see” a threat hidden inside a seemingly harmless web link, email or on-screen message.

Organizational leaders are aware of this and are growing increasingly concerned – for good reason. At companies that have suffered a breach, nearly half of all C-suite executives cite human error or accidental loss by employees or insiders as the cause, according to research from Shred-it.

In addressing this, some businesses seek to upskill their employees in the hope that they’ll develop “high cybersecurity diligence.” But such initiatives prove futile. We can certainly educate employees about basic “common sense” security practices, but anything beyond that is likely to fall well short of the goal – and holding end users accountable for breaches is irresponsible.

Gartner predicts that by 2020, 60 percent of digital businesses will suffer major service failures due to the inability of their IT security teams to respond effectively to digital risk. The projection brings up a rather obvious but troubling question: if even security teams will be outsmarted by the increasing sophistication of cyber threats, how can companies possibly expect their non-security professionals to keep up?

So do we resign ourselves to a bleak reality that a major cyber assault is a “when not if” proposition? Or do we explore other paths to enhance protection?

I’d recommend the latter, because there are available, proven techniques, practices and tools that will defend operating environments while significantly lowering – if not outright eliminating – the risk of human error. Here are two of them:

Web isolation

Also known as remote browsing, web isolation sets up users in an isolated environment in which they can browse potentially risky websites. If malware compromises the environment, the damage is minimized because the isolation ensures that the malware has no access to sensitive systems or data, and that the user’s actual endpoint remains secure. In concept, web isolation builds upon the sandboxing already provided by the user’s browser – but, executed correctly, it establishes a vastly higher level of security.

To respond to malware-based phishing attacks with web isolation, security teams can typically pursue one of the following two options:

Inbound mail gateway: Mail gateways often provide link-rewriting capabilities that determine how each link is handled. With some web isolation solutions, it is possible to rewrite links in inbound emails so that, when clicked, the link opens in web isolation. Many mail gateways allow whitelisting of senders or domains if there are known, trusted links that should always be opened in the native endpoint browser.
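By way of illustration, here is a minimal sketch of the link-rewriting decision such a gateway might apply. The isolation gateway address and the trusted-domain whitelist are assumptions invented for the example, not part of any particular product:

```python
from urllib.parse import urlparse, quote

# Hypothetical isolation gateway and trusted-domain whitelist
# (assumptions for this sketch, not a real product's values).
ISOLATION_GATEWAY = "https://isolate.example.com/browse?url="
TRUSTED_DOMAINS = {"intranet.example.com", "partner.example.com"}

def rewrite_link(url: str) -> str:
    """Rewrite a link so it opens via web isolation, unless its
    domain is whitelisted to open in the native endpoint browser."""
    domain = urlparse(url).hostname or ""
    if domain in TRUSTED_DOMAINS:
        return url  # trusted: leave the link untouched
    # Anything else is routed through the isolation gateway,
    # with the original URL carried as an encoded parameter.
    return ISOLATION_GATEWAY + quote(url, safe="")

print(rewrite_link("https://intranet.example.com/hr"))
print(rewrite_link("https://suspicious.example.net/login"))
```

A real gateway would apply this per-link while parsing the message body; the point is simply that the trusted set opens natively and everything else is wrapped.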

Proxy or secure web gateway: Of course, a mail gateway will only rewrite links in emails. Attackers will exploit other channels, such as chat or file transfer, to send phishing links to a user. To address this risk, consider deploying a solution based on the proxy (or other secure web gateway). In this case, teams build a URL whitelist into the proxy so that any non-whitelisted domain is accessible strictly via web isolation.

Methods for defining this whitelist can vary. A number of organizations are enforcing extremely tight whitelists limited solely to known and trusted cloud services which are able to provide security documentation and audit reports.

An alternative, more flexible approach involves documenting all the domains that users have visited in the last month or two and using these to create a whitelist. Even for the largest organizations, this whitelist is likely to cover less than 1 percent of the web – but it will probably cover more than 99 percent of the browsing that takes place in the next month.
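The log-driven approach above can be sketched in a few lines. The log entries here are invented placeholders; in practice the input would be one or two months of proxy access logs:

```python
from urllib.parse import urlparse

# Hypothetical sample of last month's proxy access log
# (placeholder URLs, purely for illustration).
last_month_log = [
    "https://mail.example.com/inbox",
    "https://docs.example.com/report",
    "https://mail.example.com/sent",
    "https://news.example.org/today",
]

def build_whitelist(log_urls):
    """Collect the set of domains seen in historical browsing logs."""
    return {urlparse(u).hostname for u in log_urls}

def coverage(whitelist, new_urls):
    """Fraction of new browsing that stays on whitelisted domains;
    the remainder would be routed into web isolation."""
    hits = sum(1 for u in new_urls if urlparse(u).hostname in whitelist)
    return hits / len(new_urls)

whitelist = build_whitelist(last_month_log)
next_month = [
    "https://mail.example.com/inbox",       # whitelisted: browses natively
    "https://phish.example.net/login",      # not whitelisted: isolated
]
print(coverage(whitelist, next_month))  # 0.5 on this toy sample
```

On real traffic, the claim in the text is that this coverage figure lands above 0.99 – which is why users barely notice the control, while a never-before-seen phishing domain almost certainly falls outside the set.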

Given this, users will hardly notice the difference – more than 99 percent of their browsing will proceed exactly as before. But any phishing link is almost guaranteed not to be on the whitelist and will therefore end up in web isolation if opened.

Hardsec

Web isolation is all very well, but if the isolation solution itself can be compromised, it is useless. To avoid that risk, use a hardsec-based approach.

We have benefited for decades from the amazing flexibility of software running on a CPU. But that same flexibility is the Achilles’ heel of today’s IT environment: infeasibly complex systems in which simple bugs can lead to vulnerabilities with unbounded impact. The very power that allows us to develop any imaginable functionality simply by supplying the right instructions also allows attackers who spot an unexpected quirk in behavior to substitute their own instructions and subvert the entire function of the computing platform.

This is also the weak point of all security software, with the result that well-intentioned protection tools may actually leave enterprises more exposed. Indeed, numerous endpoint security products have suffered vulnerabilities so serious that not deploying them in the first place would have been safer.

Hardsec has emerged as a viable alternative architecture. Originating in the UK government security community about a decade ago and evolving ever since, hardsec uses hardware, rather than the usual software- and monitoring-based approach, to tackle the growing challenge of cybersecurity. Instead of CPUs, it deploys Field Programmable Gate Array (FPGA) integrated circuits, which can be programmed only through specific physical FPGA pins. This allows security teams to restrict reprogramming of the FPGA – by physical hardware design and implementation – to those with access to a well-protected, privileged management environment. Attackers are kept out because they cannot physically transmit data to the pins.

In contrast to complex, flexible software-based tools that give cyber adversaries abundant opportunities for exploitation, hardsec controls are comparatively simple and narrow. They are, as we like to say, “too dumb to hack.”

With hardsec, we achieve the best of both worlds: the strength of hardware security, which sits outside traditional Turing-machine computer logic and therefore doesn’t suffer the vulnerabilities of software; and the flexibility of software, which allows common base hardware platforms to play multiple roles depending on how they are programmed.
