Thoughts on Biden’s cybersecurity Executive Order

Colonial Pipeline is a major American oil pipeline system that originates in Houston, Texas and supplies gasoline and jet fuel to a significant portion of the US, particularly the Southeast. The ransomware attack that hit the company’s computing environment brought its network (and operations) to a grinding halt and fueled gasoline shortages in various parts of the country.

Biden's cybersecurity Executive Order

A few days after the devastating Colonial Pipeline ransomware attack, the Biden Administration released a new Executive Order on Improving the Nation’s Cybersecurity. The timing of this move does not seem accidental.

The Colonial Pipeline attack is one of several that have ravaged US companies and national infrastructure in significant ways. In the last year and a half, we’ve witnessed major breaches with broad, devastating impact on private-sector companies and on local and federal government alike.

The compromise of SolarWinds enterprise solutions and the recent Microsoft Exchange zero-days have had a tremendous impact on the security posture of many US organizations, and it was just a matter of time before the US federal government took steps to act on these threats.

At first glance, I was quite impressed with this Executive Order. I found it not only relevant and well thought out, but also comprehensive in scope. The EO starts off by explaining that the federal government should lead by example and that its contents will be oriented towards federal information systems.

While the EO also covers threat-intelligence sharing among agencies and between providers and federal agencies, this article focuses on its more preventive security measures, specifically those relating to modernizing federal government IT infrastructure, supply-chain security, and vulnerability management.

Zero trust and the cloud

The EO discusses the cloud and the concept of zero trust extensively, and in my opinion it leans heavily towards the cloud. Implementing zero-trust environments in traditional, private data centers would be not only a massive undertaking but a needless and foolhardy one, since cloud providers already give you the infrastructure to make zero trust a reality quickly.

I appreciate the focus on zero trust, simply because I think it’s high time that people abandoned the notion of “perimeter security”. Organizations’ networks have become quite porous due to the distributed nature of their workforces, their operations, and the overall landscape of applications they use (SaaS, PaaS, etc.).

Zero trust shifts the emphasis to distributed security controls, especially for applications in the cloud. In practice, that means a greater emphasis on the following (a minimal code sketch of the principle follows this list):

  • Identity and ways of establishing identity in a distributed world
  • Micro-segmentation of networks
  • Multi-factor authentication (MFA)
  • Least privilege access control for applications and data
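To make the contrast with perimeter thinking concrete, here’s a minimal sketch of per-request, identity-centric authorization. Everything in it (the policy table, the request fields, the `authorize` helper) is a hypothetical illustration of the principle, not a reference to any particular product:

```python
from dataclasses import dataclass

# Hypothetical illustration: in a zero-trust model, every request is
# authenticated and authorized on its own, regardless of network origin.

@dataclass
class Request:
    user: str
    mfa_verified: bool  # was a second factor presented?
    resource: str       # e.g. "payroll-db"
    action: str         # e.g. "read", "write"

# Least-privilege policy: each identity is granted only the specific
# (resource, action) pairs it needs -- nothing is implied by being
# "inside the network".
POLICY = {
    "alice": {("payroll-db", "read")},
    "bob":   {("payroll-db", "read"), ("payroll-db", "write")},
}

def authorize(req: Request) -> bool:
    """Grant access only if identity, MFA, and explicit policy all check out."""
    if not req.mfa_verified:               # MFA is non-negotiable
        return False
    allowed = POLICY.get(req.user, set())  # default-deny for unknown identities
    return (req.resource, req.action) in allowed

print(authorize(Request("alice", True, "payroll-db", "read")))   # True
print(authorize(Request("alice", True, "payroll-db", "write")))  # False: least privilege
print(authorize(Request("bob", False, "payroll-db", "write")))   # False: no second factor
```

The design choice worth noticing is the default-deny posture: an unknown identity or a missing second factor yields no access at all, no matter where the request originates.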

Zero trust goes hand-in-hand with the move to the cloud, and the cloud demands that companies change how they implement security. A “user-access-first” approach makes security architecture and engineering choices far more scalable than the old-school “perimeter security” model.

SBOMs are da bomb!

Supply-chain security is finally getting its day in the sun, and I am thrilled about it. Every time we build applications, package them in containers, and deploy them in myriad environments, we’re often pulling in 1,000+ dependencies to make it happen. These dependencies play a significant role in the overall security of our applications.

In fact, according to software composition analysis provider Snyk, the biggest cause of breaches mapped to the OWASP Top 10 application security risks was “Using Components with Known Vulnerabilities,” i.e., third-party dependencies being exploited by attackers to compromise your applications. Some of the largest breaches in the world, including Equifax (via a known Apache Struts flaw), can be directly attributed to vulnerabilities in third-party dependencies.

The EO highlights the need for supply-chain security through several meaningful initiatives that federal agencies must take into account, including but not limited to:

  • Software Bill of Materials (SBOM). This is the big one. An SBOM is essentially a standardized listing of the dependencies used by your application. Think of it as a “Nutrition facts” label for your software: it typically lists all the libraries your application is composed of, along with version, license, and provenance information. Several standardized SBOM formats are in industry use today, including CycloneDX and SPDX (a small parsing sketch follows this list).
  • The bigger emphasis in the supply-chain segment of the EO is a mandate for continuous management of SBOMs, and for identifying and remediating vulnerabilities continuously and in an automated manner. This will go a long way towards bringing even the most change-averse environments into the modern age, where they will have to adapt to the way things should be done.
  • This requirement also extends to software purchased by federal departments. I am glad to see this, because software vendors have generally gotten away with doing little to advance the cause of software security. Thanks to this requirement, not only federal departments but also their vendors will have to get their houses in order.
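To make the SBOM idea concrete, here’s a minimal sketch that reads a CycloneDX-style JSON SBOM with nothing but the Python standard library and flags components that appear on a known-vulnerable list. The field names follow the CycloneDX JSON layout, but the `KNOWN_VULNERABLE` table is invented for the example; a real pipeline would pull from an advisory feed such as the NVD:

```python
import json

# Invented known-vulnerable list: (name, version) pairs an agency might
# pull from an advisory feed. struts2-core 2.3.31 is the kind of component
# implicated in the Equifax breach.
KNOWN_VULNERABLE = {
    ("struts2-core", "2.3.31"),
    ("commons-collections", "3.2.1"),
}

# A tiny CycloneDX-style SBOM document (normally produced by build tooling
# and shipped alongside the software it describes).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.3",
  "components": [
    {"type": "library", "name": "struts2-core", "version": "2.3.31"},
    {"type": "library", "name": "requests", "version": "2.25.1"}
  ]
}
"""

sbom = json.loads(sbom_json)
for component in sbom.get("components", []):
    pair = (component["name"], component["version"])
    status = "VULNERABLE" if pair in KNOWN_VULNERABLE else "ok"
    print(f'{component["name"]} {component["version"]}: {status}')
```

Run continuously against every release, a check like this is exactly the kind of automated, ongoing SBOM management the EO calls for.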

DevSecOps, automation and vulnerability management

One can’t help but notice the many allusions to automation in this document. Automation is mentioned everywhere, from the supply-chain and code stages through vulnerability management to threat detection. The EO does everything short of mentioning DevSecOps by name.

To meet the EO’s requirements at scale, organizations will have to implement automation at multiple stages of the software development lifecycle (SDLC), which will let them identify security issues much earlier in the SDLC.
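As one small example of what SDLC automation can look like, here’s a sketch of a CI gate that fails the build when scanner findings reach a severity threshold. The findings file, its schema, and the threshold are assumptions made for illustration, not tied to any particular scanner:

```python
import json
import sys

# Severity ranking for comparison; the schema is an assumption for this sketch.
SEVERITIES = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_THRESHOLD = "high"  # break the build at high severity or above

def gate(findings_path: str) -> int:
    """Return a process exit code: 0 = pass, 1 = block the build."""
    with open(findings_path) as f:
        # Expected shape: [{"id": "...", "severity": "..."}, ...]
        findings = json.load(f)

    blocking = [
        finding for finding in findings
        if SEVERITIES.get(finding["severity"], 0) >= SEVERITIES[FAIL_THRESHOLD]
    ]
    for finding in blocking:
        print(f'BLOCKING: {finding["id"]} (severity: {finding["severity"]})')
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```

Wired in as a pipeline step that runs after the scanners, a gate like this turns “find vulnerabilities early” from a policy statement into something the build system actually enforces.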

I have been a VERY vocal proponent of DevSecOps. Doing security effectively relies on adopting multiple security feedback loops in the SDLC. From threat modeling to threat hunting, DevSecOps is less about automation and more about adopting a way of doing security that keeps pace with the increasing speed of application development and delivery.

That said, I think organizations should adopt the following approaches to stay in line with, and even ahead of, this EO:

  • Agile threat modeling – Threat modeling is great, but ensuring it is done effectively as part of the SDLC is better. Believe me, this approach works: I have trained scores of teams that have done threat modeling in increments and found it to be a blueprint for their entire application security program.
  • Strategic security automation – Leveraging automation for static analysis, software composition analysis, SBOM generation, and dynamic/interactive security testing is the way to not only identify but also remediate vulnerabilities early in the cycle (another requirement of the EO).
  • Vulnerability management tooling – If you need to do this at scale, it makes sense to adopt a comprehensive vulnerability aggregation and management process so you can make sense of findings across multiple applications and releases. Tools like Orchestron, DefectDojo, Nucleus, etc. can help (a small aggregation sketch follows this list).
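On the aggregation point, here’s a minimal sketch of what such tooling does at its core: normalizing findings from different scanners into one record shape, then de-duplicating them. Both raw output shapes below are invented for the example; every real tool has its own schema:

```python
from dataclasses import dataclass

# A normalized finding record; aggregation tools maintain a similar
# common schema internally.
@dataclass(frozen=True)  # frozen makes instances hashable, so sets work
class Finding:
    app: str
    vuln_id: str
    severity: str
    source_tool: str

# Invented raw outputs from two different scanners reporting the same issue.
sast_output = [{"rule": "CVE-2017-5638", "level": "Critical", "project": "billing"}]
sca_output = [{"vuln_id": "CVE-2017-5638", "sev": "CRITICAL", "app_name": "billing"}]

def normalize(sast, sca):
    """Map each tool's schema onto the shared Finding record."""
    findings = set()
    for item in sast:
        findings.add(Finding(item["project"], item["rule"], item["level"].lower(), "sast"))
    for item in sca:
        findings.add(Finding(item["app_name"], item["vuln_id"], item["sev"].lower(), "sca"))
    return findings

# De-duplicate on (app, vuln_id) so one issue reported by two tools
# becomes one work item rather than two tickets.
unique = {(f.app, f.vuln_id): f for f in normalize(sast_output, sca_output)}
for f in unique.values():
    print(f"{f.app}: {f.vuln_id} [{f.severity}]")
```

The payoff is that triage happens once per issue per application, not once per tool.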

Overall, I think this EO is definitely a step in the right direction. It’s clear and prescriptive, yet relevant to the current moment, and it keeps both the near- and long-term future in mind. And while its implementation across multiple federal departments, and its cascading effects on federal vendors and, hopefully, the entire software ecosystem, remain to be seen, I for one appreciate the initiatives introduced as part of this Executive Order.
