In this podcast recorded at Black Hat USA 2018, Tim White, Director of Product Management, Policy Compliance at Qualys, talks about the importance of incorporating inaccessible or sensitive asset data into your overall vulnerability and compliance program.
Here’s a transcript of the podcast for your convenience.
Hello, my name is Tim White. I’m director of product management for compliance at Qualys, and today I’m going to talk to you a little bit about the importance of incorporating inaccessible or sensitive asset data into your overall vulnerability and compliance program.
It’s truly important to understand how visibility affects your ability to implement an effective security program. Most organizations today are doing a great job at vulnerability management. They’ve extended their vulnerability programs to leverage threat intelligence and focus their remediation efforts on the issues that can have the biggest impact on risk reduction. In past podcasts, I’ve talked a little bit about the importance of configuration assessment to securing systems. That’s also a critical task, because systems that are vulnerable by configuration are a significant risk exposure for most organizations: configuration assessment has not always been done very deeply, at least not for systems outside the compliance-related or regulated parts of the environment.
And some organizations are beginning to do a better job at this. They’re expanding their configuration assessment and compliance programs to cover critical systems that touch their sensitive environments: system administrator machines, critical desktops, executive systems, and so on. That’s a great thing, and we’ve seen some expansion in those areas.
But there are still assets in the organization that are not accessible today. I’ll talk about some examples in a moment, but there are critical systems that you can’t directly connect to, systems that are too important to the operation to run scans against, and a variety of other reasons why some of your most sensitive, most security-critical systems are not necessarily being reported on in your overall compliance, governance, and risk programs.
These gaps in visibility affect your security program in several ways. First and foremost, you have to know what you’re securing in order to secure it. With an incomplete asset inventory, even if those systems are recorded somewhere else in another system, not having them unified in a centralized environment for your compliance reporting can lead to people not realizing those systems are there, and not analyzing them on the regular basis that they should.
You also increase your exposure to unknown vulnerabilities and misconfigurations. These are typically highly locked-down systems that you may not normally be assessing. However, they still become vulnerable: new vulnerabilities are discovered every day, and without a complete inventory and complete scanning of these systems, those vulnerabilities can lie dormant for some time before they’re either exploited or somebody finally gets audited and runs across them.
Overall, you end up with ineffective risk evaluation because you don’t have visibility into these assets. That can have a significant impact on compliance at audit time, as well as on your overall risk reduction program, and of course it can result in security breaches. Ultimately, we’re doing all of these security tasks to try to prevent breaches; systems that you are not keeping an eye on can be a significant weakness and ultimately lead to a breach.
Common blind spots fall into a few areas. You have sensitive systems, which can be production systems involved in transaction processing, or critical batch processing during peak seasons for online retailers. We’ve seen critical banking infrastructure and backend systems where you don’t necessarily have the right to scan. Credential management can be a challenge for those systems because of their sensitivity level, and they may not be running the services needed to remotely assess them. In a lot of cases you have regulated devices, in the healthcare industry for example: many FDA approvals require configuration management, and you may not be given the rights to log in and execute arbitrary commands, because that might break components of the system that have been FDA approved. So we see a lot of cases where really sensitive medical appliances and systems involved in healthcare have restrictions on being accessible.
Then there are legacy systems, including mainframes and big data systems. Assessment automation today is really focused on the areas that have the broadest deployment. Most assessment technologies cover the common Windows and Linux variants, in a lot of cases your UNIX platforms, and maybe some well-known and highly common networking equipment.
But when you start getting into telephony, appliances, and other devices that sit on the periphery of the network but are still critical to operations, you may not have visibility into those devices. And then of course you have air-gapped networks. There’s always that portion of your environment that has been isolated. We see this in a lot of control systems, and in really critical data warehouses where data is being replicated offline, or out of band, into an air-gapped network. A lot of IoT environments and really sensitive control systems are isolated on air-gapped networks, but hackers have found ways to bridge these air gaps in unique and interesting ways. I’ll leave that for another podcast, but these systems can still have vulnerabilities, they can have misconfigurations, and they’re open to insider attack. So we can’t just ignore them because they’re isolated from the Internet.
And of course appliances can be another big area. A lot of appliance solutions today run a modified Linux operating system with a stripped-down kernel or limited command availability. They don’t follow the standard layout of a Linux operating system, so you can’t just treat them as Linux targets and assess their configuration, and the controls may be completely different. These appliance-based operating systems often have their own shell front end, if they even have a shell, or some other command, management, or API-driven interface that needs to be assessed in different ways. Typical scanning approaches don’t necessarily work on those.
So today, organizations use a variety of options to deal with these, if they’re dealing with them at all. In a lot of cases there’s the mindset that these are secure systems: they’re isolated, they’re locked down, I don’t need to worry about them or include them in my compliance reporting, because I know I’m doing a lot to lock them down. But as we’ve seen time and time again, configuration drift, new vulnerabilities, and all of the other things I mentioned earlier continue to drive security misses in those environments.
Some organizations, the more mature security organizations that realize these facts, have implemented a variety of workarounds. There are ad hoc scripts, where they can run a script and get an output that tells them what the health of the device is. There are procedural control assessments, where you build a questionnaire, send it to an internal system administrator, and ask a series of questions about the device’s configuration and what types of regulatory controls have been put in place. There are external audits by paid auditors, though the downside to that approach is that it doesn’t scale well. And there are some limited software-based solutions, where you can run a toolkit to gather some information manually in these environments.
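As a minimal sketch of what such an ad hoc script might look like: the example below evaluates a captured configuration snippet against a few hardening checks and emits JSON that could later be imported into a central reporting system. The control names and expected values here are illustrative assumptions, not an official benchmark or any vendor’s format.

```python
import json

# Hypothetical ad hoc collection script for an isolated host: parse a
# captured sshd_config-style snippet and check it against a small example
# policy, producing machine-readable findings for later import.

SAMPLE_CONFIG = """\
PermitRootLogin no
PasswordAuthentication yes
Protocol 2
"""

# Example policy: control keyword -> required value (illustrative only).
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "Protocol": "2",
}

def parse_config(text):
    """Parse 'Keyword value' lines into a dict, skipping blanks and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        settings[key] = value.strip()
    return settings

def assess(settings, expected):
    """Return one pass/fail finding per control in the policy."""
    return [
        {
            "control": key,
            "expected": want,
            "actual": settings.get(key),
            "status": "pass" if settings.get(key) == want else "fail",
        }
        for key, want in expected.items()
    ]

findings = assess(parse_config(SAMPLE_CONFIG), EXPECTED)
print(json.dumps(findings, indent=2))
```

The point of structuring the output this way is that the same JSON can be produced on an air-gapped system, carried out of band, and merged into central reporting later.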
But the challenge is getting that data back into a unified reporting environment so you can see it in the context of your entire estate. There are definitely things you need to take into account. At a minimum, implement some type of automated approach. Procedural control assessment is a great way to at least make sure you have a repeatable process in place for configuration assessment. Make sure you’re gathering inventory data and storing it in your inventory databases, your CMDBs, and your inventory reporting systems. Make sure these systems are visible to the InfoSec and audit teams, and that those teams are aware of the compensating controls that might be in place to protect these systems.
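To make a procedural control assessment repeatable, the questionnaire and its answers can be kept as structured records rather than free-form email, so responses flow into the same reporting pipeline as scan data. The question IDs and field names below are made-up examples for illustration.

```python
import json

# Illustrative sketch: a procedural control questionnaire as structured
# records, scored against expected answers. Questions and IDs are
# hypothetical examples, not from any official control framework.

QUESTIONNAIRE = [
    {"id": "Q1", "question": "Is remote root login disabled?", "expected": "yes"},
    {"id": "Q2", "question": "Are default vendor accounts removed?", "expected": "yes"},
    {"id": "Q3", "question": "Is the device on an isolated network?", "expected": "yes"},
]

def score_responses(questionnaire, responses):
    """Match each administrator answer to its question and record pass/fail."""
    results = []
    for q in questionnaire:
        answer = responses.get(q["id"])
        results.append({
            "id": q["id"],
            "answer": answer,
            "status": "pass" if answer == q["expected"] else "fail",
        })
    return results

# An administrator's answers for one inaccessible asset.
responses = {"Q1": "yes", "Q2": "no", "Q3": "yes"}
results = score_responses(QUESTIONNAIRE, responses)
print(json.dumps(results, indent=2))
```

Because each answer is keyed to a stable question ID, the same questionnaire can be re-sent on a schedule and results compared release over release, which is exactly the repeatability the process needs.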
Qualys is introducing our new out-of-band configuration assessment tool, which uses a flexible data collection approach: you can use an API or manual data collection to pull this critical data back into the Qualys platform and then analyze it just as if you had scanned the device.
We support pulling that data in for inventory with our Qualys Asset Inventory product, for policy compliance, and for vulnerability management, and you can completely automate and customize this to fit your environment. Qualys SCA allows you to cover your key blind spots by inventorying and assessing these isolated or inaccessible assets. It lets you use an API or the UI to gather the information and import it into the platform, so you can tie it into your existing manual processes and automate everything you possibly can. It provides complete compliance visibility, giving you the ability to assess those isolated, locked-down systems for misconfigurations and vulnerabilities. And it provides broader platform coverage, because it allows us to extend coverage to legacy and uncommon platforms, including network devices, applications, appliances, mainframes, and more to come.
So, thank you for listening to this podcast today. If you have any questions, of course, visit our website; we have a variety of information and trial signups for all of our Qualys apps. Have a great day, and secure your environment using automation.