Like most of the security community, I have spent hours digesting the recently released U.S. House of Representatives Committee on Oversight and Government Reform report on the Equifax breach. I read the report with a mix of heartfelt empathy and fear, because I understood some of the findings all too well.
I feel empathy because my role has offered me a unique view of the size and scope of the threats facing many organizations; fear because some of the practices outlined in the report are incredibly common across enterprises large and small. I have no insider knowledge of what occurred at Equifax, but here are some of the things in the committee’s report that immediately stand out.
“…Equifax’s Global Threat and Vulnerability Management (GTVM) team disseminate US-CERT notification internally by email requesting responsible personnel apply the critical patch within 48 hours.”
Does giving development teams 48 hours to patch a framework on legacy systems with code from as early as the 1970s seem reasonable? A 48-hour deadline might be appropriate for a patch to a desktop system. But this was a far more complicated endeavor.
Colin Powell has been quoted as saying “Experts often possess more data than judgment” and that seems to be the case here. The GTVM team had the data to show that the Struts vulnerability was something they should worry about. However, setting an arbitrary deadline to patch Struts without having the full technical details makes it appear as if they were treating it in the same manner you would handle a standard desktop patch.
“…Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic.”
Equifax had purchased and installed a tool in their data centers that was capable of detecting this attack. It was inactive for over 19 months. In this timeframe, they had likely paid for support and maintenance on these devices.
How does this happen? My best guess is that maintenance on this visibility layer got deprioritized due to other more pressing issues. In hindsight, the security community can say that Equifax could have prevented the breach with a simple fix. In reality, they, like just about every other organization in the U.S., had countless vulnerabilities to address and tools to manage, and they did not have an effective system for prioritizing the things that mattered.
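A fix of the kind hindsight suggests is not complicated. As a purely illustrative sketch (not anything Equifax actually ran), a periodic job that checks how many days remain on an endpoint's certificate would have flagged the lapse long before 19 months went by. The `notAfter` field below is the standard textual form Python's `ssl` module returns for a peer certificate:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days until an X.509 'notAfter' timestamp, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def check_endpoint(host, port=443, warn_days=30):
    """Fetch the live certificate from host:port and warn if it expires soon."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = days_until_expiry(cert["notAfter"])
    if remaining < warn_days:
        print(f"WARNING: certificate for {host} expires in {remaining} days")
    return remaining
```

The point of the sketch is not that the check is hard to write; it is that someone has to own running it, and that ownership is exactly what gets deprioritized.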
Where are the developers?
“Developers provided Vulnerability Assessment employees with the application’s WAR file – a compressed package containing all of the files and other Java components used to run an application. The WAR file confirmed the ACIS application was running a vulnerable version of Apache Struts.”
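The check the report describes — opening the WAR and confirming which Struts version is bundled — takes little more than the standard library. The sketch below is an illustrative reconstruction, not Equifax's actual tooling; the version ranges in `is_vulnerable` reflect CVE-2017-5638, which was fixed in Struts 2.3.32 and 2.5.10.1:

```python
import re
import zipfile

def find_struts_jars(war_path):
    """Return (entry, version) pairs for Struts core jars bundled in a WAR.

    A WAR is just a zip archive; bundled libraries live under WEB-INF/lib/.
    """
    pattern = re.compile(r"WEB-INF/lib/struts2?-core-([\d.]+)\.jar$")
    hits = []
    with zipfile.ZipFile(war_path) as war:
        for name in war.namelist():
            m = pattern.search(name)
            if m:
                hits.append((name, m.group(1)))
    return hits

def is_vulnerable(version):
    """Simplified range check for CVE-2017-5638 (2.3.x < 2.3.32, 2.5.x < 2.5.10.1)."""
    v = tuple(int(part) for part in version.split("."))
    if v[:2] == (2, 3):
        return v < (2, 3, 32)
    if v[:2] == (2, 5):
        return v < (2, 5, 10, 1)
    return False
```

That this one-look check required tracking down the original developers for the WAR file says a lot about how far the application layer sat from the vulnerability management process.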
When I read this section, a few questions popped into my mind. How does a report about one of the largest recent data breaches only involve the development team responsible for maintenance when someone needed a copy of the source code? Did the Global Threat and Vulnerability Management (GTVM) team reach out to them? What role did the developers play in the incident response and remediation process?
Also: Wait, what?
“According to Payne, the company was ‘lucky that we still had the original developers of the [ACIS] system on staff.’”
The fact that this report is amazingly light on developer-driven action makes it clear that Equifax really didn’t have a firm understanding of the fundamental differences between their standard vulnerability management and application security management programs.
“Equifax disseminated the US-CERT notification via the GTVM listserv process. Approximately 430 individuals and various distribution lists received this email…”
“On October 2, 2017, Equifax terminated Graeme Payne, the Senior Vice President and CIO for Global Corporate Platforms tasked with managing the ACIS environment. Payne was a highly-rated Equifax employee for seven years prior to the data breach.
Payne told the Committee he was called into a meeting with two human resources employees who advised him he was being terminated as a result of the incident investigation. When he pressed for more information about the investigation, human resources declined to provide any documentation for the investigation, but told Payne he failed to forward an email.”
After all of this comes the notion of accountability. After all the failed controls, the unreasonable demands, and the missing developers, the one action that was taken was to fire the CIO for not forwarding an email that over 430 people had already seen. That allowed the board to make the following misguided statement:
“The human error was the individual who is responsible for communicating in the organization to apply the patch did not.”
Even months after the breach occurred, they were still trying to act as if this boiled down to something as simple as the CIO not installing a Chrome update. This cuts straight to the fact that security is complicated. Management teams too often want easy answers.
The fact of the matter is, Equifax could have done better. Were there failures? Yes. Are there other organizations that could fall prey to a breach that is just as serious? Yes. Every large enterprise in the U.S. faces more vulnerabilities than it can manage, pays for security tools it doesn’t use or understand, and has applications built upon systems made for mainframes designed before most of us were born.
Security is hard, and like anything that deals with risk, it is difficult to address across a large organization. The conditions described in the committee’s report exist in nearly every major U.S. company. Developers and executives are held accountable for these issues, even when they don’t have the resources or authority to manage them.
The committee’s report will likely raise a lot of eyebrows in the media and among regulators. There are a lot of good soundbites there. For security professionals, what happened at Equifax is the result of a predictable, and not unusual, set of circumstances.