Endpoint breach prevention by reducing attack surfaces
Here’s a transcript of the podcast for your convenience.
Welcome to this Help Net Security podcast. I'm Chris Carlson, vice president of product management at Qualys, and today we'll talk about endpoint breach prevention by reducing attack surfaces. That really is a key idea behind how you run your security controls and security programs: a computing asset, an operating system, or a web application has a certain surface area, and that surface area can be exploited by attackers and adversaries. So how do you reduce that surface area, the attack surface, and make it harder to exploit and breach? That really is a change from what's happened in the past 10 years, maybe even five years.
So, in the early 2010s, around 2011, there was a lot of focus on zero-day vulnerabilities. In a lot of the nation-state activity and a lot of the big breaches that came out, it was zero-day vulnerabilities, zero-day exploits, the unknown unknowns. How do you defend against something when you don't know what's there? That really drove a lot of the breaches. But since then, the modus operandi of adversaries has shifted. Some are still using zero days, but it's a lot easier, more economical, and actually more profitable for adversaries, especially organized crime after financial gain, to focus on rapidly weaponizing newly disclosed vulnerabilities.
What does that mean? Rapidly weaponizing means that as vulnerabilities are disclosed and the vendor comes out with a patch, adversaries create an exploit from that vulnerability and weaponize it for some type of attack campaign before the enterprise has the time and the ability to close that vulnerability by patching it.
And I think the biggest thing that really hit us, that really made the world step up, was WannaCry in the first part of 2017. From a data point of view, the number of known critical vulnerabilities has been increasing every year since 2010. In any given year there are six to seven thousand vulnerabilities disclosed, and about 30 to 40 percent of them are high or critical severity. High or critical severity means they can be exploited remotely to get privilege escalation and possibly execute remote code, which lets an attacker take control of the machine. And once you control that machine, you can implant command and control, you can change the operating system, you can add a local user account; that's how you take control.
So, with six to seven thousand new vulnerabilities per year, and 30 to 40 percent of them high or critical severity, you're looking at maybe around 2,500 or 3,000 vulnerabilities every year that are easily exploited, and that is where adversaries are now focusing their time. And for a couple of reasons. One, like I mentioned, it's easy to do, so it's very easy to weaponize. But they're also counting on the fact that the enterprise operational process to identify and remediate vulnerabilities, frankly, is slow. They are counting on internal inertia within IT teams: IT teams not wanting to risk business continuity issues by patching a SQL server or patching Java. That really increases the window of time in which vulnerabilities can be exploited in targeted attack time frames. And that really became a point of inflection last year, in the first part of 2017, around WannaCry.
You probably heard a lot about WannaCry. We have some data here at Qualys, based on opt-in anonymized data, on the discovery of these vulnerabilities and the remediation lifecycle and timeframe to patch them. What's interesting is that maybe you remember where you were on March 14, 2017, when Microsoft disclosed the vulnerability. Maybe you don't. It was just a normal Microsoft vulnerability disclosure, one of many, albeit high severity. But one month later the EternalBlue exploit, a nation-state weaponized exploit, was discovered and exposed in the wild. What we saw at Qualys was that the customers with a mature remediation program to close vulnerabilities, which reduces the attack surface, were able to burn down those vulnerabilities in a normal one-month release cycle. That's the normal monthly cycle that mature companies go through. So, from March 14, the disclosure of the vulnerability by Microsoft, to April 13, when EternalBlue came out, many of these mature companies with operational remediation processes closed those vulnerabilities.
When EternalBlue came out, Qualys was able to create new detection methods to detect these vulnerabilities without authentication and without agents, and we saw vulnerability detections go up 2X. But what's interesting is that the burn-down of the vulnerability did not keep pace with that increase, even at mature companies. So, we helped customers detect more vulnerable systems, but they hadn't yet burned them down.
And then of course we all remember where we were on May 12, when WannaCry came out: laterally spreading ransomware that attacked user machines. It came into the environment through a user and spread laterally across the environment, and that's really where we saw a huge focus on emergency patching. Emergency patching really burned down those vulnerabilities, and that closed the window. Now, there was the WannaCry kill switch, and there might be next-generation security vendors saying they use artificial intelligence, or some magic smoke signals, to find new variants. But at the end of the day, when adversaries weaponize exploits against known vulnerabilities, patching those vulnerabilities is the cheapest and most efficient form of security prevention, or security hygiene.
The first part of this year, 2018, brought Meltdown and Spectre. Meltdown and Spectre are vulnerabilities in Intel and ARM CPU chips that allow attackers to read memory they shouldn't have access to. There weren't a lot of proofs of concept or weaponized attacks, or maybe there are some that haven't been disclosed yet, but two weeks ago, on July 26, researchers published a proof of concept called NetSpectre, which remotely exploits the Spectre vulnerability and reads arbitrary memory from a system over the wire.
So, now there is proof-of-concept exploit code out there, and now there is a chance we'll see a lot more weaponized exploits focusing on that. What we are counseling our customers on at Qualys is how to do endpoint breach prevention not by relying on prevention technologies per se, which may be turned off, may not have signatures, or may have a machine learning model that doesn't really align to the next attack vector, but by reducing the attack surface. To reduce the attack surface, you must do four things.
Number one, you must have a full asset inventory of all your systems. You must know where your servers are, where your laptops are, where your cloud computing instances are, where your print servers are, and where your IoT devices are. You can't secure what you don't have visibility into, so you need an asset inventory of what those systems are. Then you layer vulnerability management on top of that: detect the vulnerabilities in those systems, both remotely exploitable vulnerabilities and vulnerabilities on the device itself, along with the severity of each vulnerability, whether high or low. And then point number three is how you can easily prioritize what to remediate and when.
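The layering described here, inventory first, then vulnerability findings attached to each known asset, can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's data model; all class and field names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Vulnerability:
    cve_id: str
    severity: str              # e.g. "critical", "high", "low"
    remotely_exploitable: bool

@dataclass
class Asset:
    hostname: str
    asset_type: str            # "server", "laptop", "cloud-instance", "iot", ...
    vulnerabilities: list = field(default_factory=list)

# Step 1: build the inventory. You can't secure what you can't see,
# so every asset is registered before any findings are attached to it.
inventory = {}

def register_asset(hostname, asset_type):
    inventory[hostname] = Asset(hostname, asset_type)

# Step 2: layer vulnerability management on top of the inventory.
def add_finding(hostname, vuln):
    inventory[hostname].vulnerabilities.append(vuln)

register_asset("erp-db-01", "server")
add_finding("erp-db-01", Vulnerability("CVE-2017-0144", "critical", True))
```

The point of the structure is the ordering: a finding can only be attached to an asset the inventory already knows about, which mirrors "you can't secure what you don't have visibility into."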
You yourself must understand your organizational hierarchy. Is this an important server? Is this part of the ERP system? Is this my customer data? Is this an external-facing web server? Is this in my DMZ? Is this my source control, my GitHub system? You must know your own organizational hierarchy of importance and layer business criticality on top of that. Then you have the ability to layer on threat feeds and threat intelligence to know which vulnerabilities are being exploited, and that further narrows the set of vulnerabilities to target and remediate down to those being actively exploited in the wild.
That's the kind of threat intelligence that is really unique and different in the industry. A lot of threat intelligence is focused on "here's a hash, here's a domain name that's serving malware", but knowing which vulnerabilities are being exploited is different: are there public exploits in the wild? Is there proof-of-concept code? Is it easily exploited? Is it part of an exploit kit? That narrows things down to the smallest set of assets that you must remediate. And how do you remediate them? Ultimately via patching: either a patch, a configuration change, or in some cases removing the software.
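A minimal sketch of the prioritization just described: base severity times business criticality, boosted by the threat-intelligence signals named above (active exploitation, exploit-kit membership, public proof-of-concept code). The weights and field names here are hypothetical, chosen only to show the shape of the calculation.

```python
def priority_score(vuln, asset_criticality):
    """Rank what to remediate first: severity times business criticality,
    boosted by threat-intel signals about real-world exploitation."""
    severity_weight = {"low": 1, "medium": 2, "high": 4, "critical": 8}
    score = severity_weight.get(vuln["severity"], 1) * asset_criticality
    if vuln.get("exploited_in_wild"):
        score *= 4          # weighted highest: attacks are happening now
    if vuln.get("in_exploit_kit"):
        score *= 2
    if vuln.get("public_poc"):
        score *= 2
    return score

# Hypothetical findings: (vulnerability, criticality of the asset it's on).
findings = [
    ({"cve": "CVE-A", "severity": "high", "exploited_in_wild": True}, 3),
    ({"cve": "CVE-B", "severity": "critical"}, 5),
    ({"cve": "CVE-C", "severity": "low", "public_poc": True}, 1),
]
ranked = sorted(findings, key=lambda f: priority_score(*f), reverse=True)
```

Note how the actively exploited high-severity finding outranks the critical one on a more important asset: the "exploited in the wild" signal is exactly what lets a team ignore thousands of theoretical findings and patch the handful that adversaries are actually weaponizing.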
We have a lot of customers who, when they realize they have vulnerabilities in Oracle Java on their systems and evaluate the last time Java executed on those systems, find that Java has never run on them. It was a default load; it was just put on there. So, another way to reduce the attack surface is not to spend the time patching it, but to understand whether the application is even being used, and simply remove it.
Now, in reality you cannot always lower the attack surface to the point where you have no attacks, no exploits, and no breaches. So, in those cases where a breach does occur and is successful, how are you able to identify attacker or adversary behavior in your environment before they have a chance to move laterally, identify critical assets, and investigate or prepare for data exfiltration, the kill-chain stages? That's really your ability to turn the tables on the attacker, using the adversary's own tactics against them.
So, for example, if you're looking at prevention technology as a security control, that prevention technology must be 100 percent accurate, 100 percent of the time, in order to prevent all of these breaches, exploits, or weaponized attacks. For those cases where you're not able to patch the vulnerability, remediate by removing the vulnerable software, or have that security control in place, the way to turn the tables on the attacker is to implement a detection and response capability. With detection and response in place, it's the attacker who has to be 100 percent hidden and 100 percent evasive all the time to avoid detection by the IT and security teams, and that is very hard to do.
Consider the amount of digital exhaust given off in the environment: user login records, remote scanning records, user log records that have been deleted on the system, local user accounts created, outbound command and control. There's a lot of digital exhaust created by adversaries trying to operate stealthily in the environment. So, if you're able to record this telemetry from these systems and send it to a platform that can analyze it, then you can detect adversarial activity, their techniques, tactics and procedures, ideally before they are able to compromise your key data and before they're able to stage exfiltration.
There are many types of technologies able to do that. One is an endpoint detection and response capability, one is file integrity monitoring, another is passive network sensing that detects traffic in the environment. But each individual application creates a silo of its own. So, the best practice for an enterprise is to take telemetry from the agent, telemetry from the endpoint, and telemetry from the network, and unify them in a common platform, so you get a view of all the event activity that's happening, matched against the vulnerability posture and the secure configuration posture of the asset.
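The unification step can be sketched as a normalization layer: each sensor's records are mapped into one common event schema and appended to a per-asset timeline. The sensor names and field names below are invented for illustration; real EDR, FIM, and network products each have their own formats.

```python
def normalize(source, raw):
    """Map each sensor's record format into one common event schema,
    so endpoint, file-integrity, and network telemetry land in a
    single per-asset timeline instead of three separate silos."""
    if source == "edr":
        return {"asset": raw["host"], "ts": raw["timestamp"],
                "type": "process", "detail": raw["process_name"]}
    if source == "fim":
        return {"asset": raw["hostname"], "ts": raw["time"],
                "type": "file_change", "detail": raw["path"]}
    if source == "network":
        return {"asset": raw["src"], "ts": raw["ts"],
                "type": "connection", "detail": raw["dst"]}
    raise ValueError(f"unknown sensor type: {source}")

timeline = {}  # asset -> ordered list of normalized events

def ingest(source, raw):
    event = normalize(source, raw)
    timeline.setdefault(event["asset"], []).append(event)

# Two different sensors reporting on the same asset end up in one view.
ingest("edr", {"host": "web-01", "timestamp": 1, "process_name": "powershell.exe"})
ingest("fim", {"hostname": "web-01", "time": 2, "path": "/etc/passwd"})
```

Once both events share a schema and a key, the per-asset view the next paragraph describes, secure, compromised, or being used as a staging area, falls out of a single lookup rather than three separate console searches.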
Then, from an incident response and security operations standpoint, you can get a single view of that asset in order to identify, one, whether that asset is secure; two, whether that asset has been compromised; and three, whether that asset is being used as a staging area to move laterally inside the network. So, there are a lot of areas to focus on for endpoint breach prevention by reducing the attack surface. Number one, focus on asset inventory: know where all the global IT assets are, identify them, categorize them, and do a vulnerability assessment on top of them to know which systems have vulnerabilities in the operating system, in applications, and in unused applications.
Prioritize remediation of those vulnerabilities based on the likelihood or presence of current exploits in the wild. And then lastly, have an operational process in which security works hand in hand with IT to identify and quickly patch those vulnerabilities and/or remove the affected software from the operational environment. That is really reducing the attack surface. At the same time, have a platform to record endpoint telemetry and network telemetry and identify adversary activity, whether an adversary has exploited a vulnerability or is operating inside the environment, so you can react and respond before there is a breach or a loss of data.
It sounds complicated, like you need to get 17 different products. But there are vendors out there with a common integrated platform: a single set of sensors and a single agent that can normalize all this data into a single console. That makes better use of your time from a security program, security control, incident response, and security operations point of view, so you can turn the tables on the attackers, get more advantage from your limited defender resources, and overall improve the security posture and lower the risk posture of your enterprise.