DIY attack surface management: Simple, cost-effective and actionable perimeter insights

Modern-day attack surface management (ASM) can be an intimidating task for most organizations, with assets constantly changing due to new deployments, assets being decommissioned, and ongoing migrations to cloud providers. Assets can be created and forgotten about, only to be found many years later when investigating the mystery web server under the office desk.

Across our industry, the number of intrusions due to the compromise of public-facing applications or exploitation of known vulnerabilities is far too high.

How can you patch and harden what you don’t know exists?

The average organization has a sprawling estate that spans their premises, cloud and third-party hosting. Depending on the organization’s size, an analysis of its domains, subdomains, and allocated IP ranges often involves thousands, hundreds of thousands or even millions of assets.

A temporary misconfiguration or exposure can be introduced at any time, and even if it is remediated very quickly, the window of opportunity to detect the issue is small. For these reasons, attack surface management tooling must be extremely scalable and fast, trading an acceptable level of accuracy for a lower overall time to find assets and detect ephemeral risks. By the time a traditional slow scan completes against an attack surface with millions of assets, the results could already be outdated.

The rapid rise of ASM as a desirable industry niche has transformed it into an essential part of most organizations’ security strategies. The influx of industry attention has driven innovation and research into novel methods and techniques for identifying attack surfaces. Vast suites of open-source tooling have been developed to assist with ASM efforts, driven both by SaaS platform vendors and by individuals looking to gain perimeter insights.

For the most part, ASM is a recursive discovery exercise, pivoting on new knowledge continuously to identify more and more assets and organizational context. An initial domain name or “seed data point” is usually all that is required to get started.
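
As a rough illustration of that recursive loop, here is a conceptual sketch in Python; the pivot function is a placeholder for whichever passive sources or tools you choose to plug in, not a real library call.

  # Conceptual sketch of seed-driven, recursive asset discovery.
  def pivot(asset):
      """Placeholder: return any new assets discoverable from this one
      (subdomains, certificate SANs, reverse DNS names, IP ranges, etc.)."""
      return set()

  def discover(seed):
      known, frontier = set(), {seed}
      while frontier:
          asset = frontier.pop()
          known.add(asset)
          # Anything not seen before goes back onto the frontier, so each
          # new finding is itself used as a pivot point for further discovery.
          frontier |= pivot(asset) - known
      return known

  assets = discover("example.com")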

Many of the data sources used to discover assets are entirely passive and can be queried without interacting with a target organization’s infrastructure.

When performing basic discovery, you are aiming to answer a few initial questions:

  • What does my organization look like to an external attacker? Consider historical acquisitions, industry verticals, and past incidents.
  • How many domains does my organization control?
  • How many subdomains does my organization have?
  • How many network ranges does my organization have?
  • Which cloud providers are assets distributed across?
  • Of the discovered assets, how many have active DNS records?
  • Of the discovered assets, how many have an open port / targetable service?
  • How many of these assets are already in our asset register?

There are endless avenues that can be pursued to obtain additional information related to an organization, but these are a good starting point.

Organizations can quickly gain insight into various parts of their attack surface using standalone command-line tools, building easy, repeatable, and scalable workflows that help identify perimeter changes.

If enlisting the support of a commercial ASM vendor is not an option, building these workflows can support many security use cases and produce results that are competitive with some paid tool offerings.

Here are some common security use cases that organizations can easily recreate using popular open-source tooling:

  • Discover subdomains associated with your organization’s primary domain: Open-source tooling such as subfinder from Project Discovery can pull information from passive data sources such as certificate transparency logs to identify historical and current subdomains associated with a domain (a minimal sketch follows this list).
  • Identify assets across your organization with active DNS records: Open-source tooling such as dnsx from Project Discovery or zdns from The ZMap Project can give you insight into which assets have current DNS records across various query types. Identifying assets with current A/AAAA/CNAME records also helps organizations prioritize assets for additional review and further enrichment (see the resolution sketch below).
  • Identify active web applications across your organization: Open-source tools such as httpx from Project Discovery or zgrab2 from The ZMap Project allow you to identify and fingerprint web applications and their associated web frameworks. Capturing easy-to-read CSV/JSON output containing common attributes such as the HTTP server header, page title, and favicon hash, and storing the web application responses, makes it easy to identify specific technologies when new vulnerabilities are released (see the probing sketch below).
  • Identify common file exposures and misconfigurations: Open-source tools such as nuclei from Project Discovery let an organization rapidly assess its public-facing web applications for common misconfigurations and high-risk file exposures such as configuration files (see the scanning sketch below). Ensure you vet vulnerability templates appropriately according to your acceptable level of risk; some are intrusive and may leave behind artifacts.
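
For instance, a minimal subdomain-discovery sketch in Python, assuming the subfinder binary is installed and on your PATH (example.com is a placeholder for your own seed domain):

  import subprocess

  # Enumerate subdomains from passive sources with subfinder.
  # -silent prints only the discovered subdomains, one per line.
  result = subprocess.run(
      ["subfinder", "-d", "example.com", "-silent"],
      capture_output=True, text=True, check=True,
  )
  subdomains = sorted(set(result.stdout.split()))
  with open("subdomains.txt", "w") as f:
      f.write("\n".join(subdomains) + "\n")
  print(f"Discovered {len(subdomains)} subdomains")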
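
Resolution can be layered on top of that output; the following sketch assumes dnsx is installed and reads the subdomains.txt file produced above:

  import subprocess

  # Resolve the discovered subdomains with dnsx, keeping only names that
  # currently return A records. -resp includes the answering IP address.
  with open("subdomains.txt") as f:
      hosts = f.read()

  result = subprocess.run(
      ["dnsx", "-silent", "-a", "-resp"],
      input=hosts, capture_output=True, text=True, check=True,
  )
  with open("resolved.txt", "w") as f:
      f.write(result.stdout)
  print(f"{len(result.stdout.splitlines())} hosts with live A records")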
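
Web application probing can be sketched along the same lines, assuming Project Discovery’s httpx binary (not the Python httpx library) is installed; field names in the JSON output may vary slightly between httpx versions:

  import json
  import subprocess

  # Probe for live web applications and capture basic fingerprint data
  # (status code, page title, detected technologies) as JSON lines.
  result = subprocess.run(
      ["httpx", "-l", "subdomains.txt", "-silent", "-json",
       "-status-code", "-title", "-tech-detect"],
      capture_output=True, text=True, check=True,
  )
  with open("webapps.json", "w") as f:
      f.write(result.stdout)

  for line in result.stdout.splitlines():
      record = json.loads(line)
      print(record.get("url"), record.get("status_code"), record.get("title"))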
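
And for exposure checks, a minimal nuclei invocation might look like the sketch below, assuming nuclei and its templates are installed; the exposures/ template path is indicative and should be adjusted to your local template layout:

  import subprocess

  # Run exposure/misconfiguration templates against the discovered hosts.
  # Vet templates first: some are intrusive and may leave artifacts behind.
  subprocess.run(
      ["nuclei", "-l", "subdomains.txt",
       "-t", "exposures/",                  # adjust to your template layout
       "-severity", "medium,high,critical",
       "-o", "nuclei-findings.txt"],
      check=True,
  )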

The above use cases are by no means comprehensive, nor necessarily the most effective methodology for identifying particular asset types. Still, they provide an initial, repeatable mechanism for determining just how much you don’t know about your perimeter, from which you can identify easy areas for improvement.

Finally, most organizations have a significant online presence across third-party collaboration sites and code-hosting platforms, which may inadvertently expose sensitive information to the public. If leaked credentials and sensitive information are not immediately flagged by secret scanners, they can remain publicly exposed for a long time before being identified.

Organizations can stay one step ahead by monitoring public GitHub commits attributable to their primary domains in near real-time, using open-source tooling that subscribes to the GitHub Events API. While this is not a comprehensive method for detecting when secrets related to your organization are published, combined with GitHub pre-commit hooks and a wider security strategy it can significantly reduce the time to remediate exposed secrets and improve overall security posture.
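
A rough sketch of that approach, using only the public, unauthenticated GitHub Events API, might look like the following; the example.com address suffix and polling interval are illustrative, and unauthenticated requests are subject to strict rate limits:

  import time
  import requests

  ORG_EMAIL_SUFFIX = "@example.com"   # commits authored with a corporate address
  EVENTS_URL = "https://api.github.com/events"

  seen = set()
  while True:
      events = requests.get(EVENTS_URL, timeout=10)
      events.raise_for_status()
      for event in events.json():
          if event.get("type") != "PushEvent" or event["id"] in seen:
              continue
          seen.add(event["id"])
          for commit in event["payload"].get("commits", []):
              email = commit.get("author", {}).get("email", "")
              if email.endswith(ORG_EMAIL_SUFFIX):
                  # Public commit attributable to the organization:
                  # queue it for secret scanning and review.
                  print(event["repo"]["name"], commit["sha"], email)
      time.sleep(60)   # be gentle: unauthenticated calls are rate limited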

Often, gathering a large corpus of data related to your organization is only a starting point, and is something that can be matured and improved over time with additional enrichment, organizational context, and insights.
