How EU lawmakers can make mandatory vulnerability disclosure responsible

There is a standard playbook and best practice for when an organization discovers or is notified about a software vulnerability: The organization works quickly to fix the problem and, once a fix is available, discloses that vulnerability for the benefit of the community. This playbook is not always perfect, but it strikes a reasonable compromise between providing time to fix a vulnerability and disseminating that knowledge to help prevent similar vulnerabilities in the future.

EU vulnerability disclosure

The EU's newly proposed Cyber Resilience Act (CRA) threatens to upend this practice by requiring companies to report vulnerabilities before they're adequately patched. If this legislation is passed in its currently proposed form, the result could be disastrous.

The main problem: Information misuse

We generally support the goals of the Cyber Resilience Act. As currently written, much of the legislation would genuinely improve the EU's cybersecurity. But requiring companies to report unpatched vulnerabilities will likely have the opposite effect, making organizations and EU citizens less secure.

Under the CRA (in its current form), the reporting requirement would work like this: when a manufacturer identifies an actively exploited vulnerability, it has 24 hours to report the vulnerability to the European Union Agency for Cybersecurity (ENISA). Information and details about the vulnerability would then be forwarded to the Member States' Computer Security Incident Response Teams (CSIRTs), as well as to Member States' market surveillance authorities.

The primary problem here is that this information can be misused.

Historically, identified but unpatched vulnerabilities have remained undisclosed for a reason: the moment bad actors catch wind of an issue, they race to exploit it. While the CRA doesn't demand that companies forward an exploited vulnerability's full technical specifications to ENISA, it does require companies to report on a vulnerability "with details"—and these details could be more than enough to attract the attention of a savvy attacker. As the CERT Guide to Coordinated Vulnerability Disclosure puts it: "Mere knowledge of a vulnerability's existence in a feature of some product is sufficient for a skillful person to discover it for themselves."

Plus, if all unpatched vulnerability reports are first pooled in the same place and then distributed to 27 different governments across the EU, the resulting stockpiles will become irresistible targets for attackers. The CRA (as currently written) also does not adequately safeguard against government agencies getting their hands on this information and potentially misusing it.

The CRA also risks providing cover to other problematic vulnerability disclosure laws, specifically China's Regulations on the Management of Network Product Security Vulnerabilities (RMSV). This 2021 law requires China's network product providers to promptly notify the Chinese government about any vulnerabilities detected in network products. Research suggests that these vulnerability disclosures are now being used for intelligence purposes, and that the regulation may have harmed software providers' access to vulnerability information by dissuading ethical hackers from searching for vulnerabilities in the first place.

By encouraging ethical hackers to search for and report potential vulnerabilities, companies have safeguarded both their own businesses and the data of their customers. But the CRA may have a chilling effect on this kind of good-faith security research, and businesses may decide to take an ignorance-is-bliss approach—after all, if an ethical hacker does surface an unpatched vulnerability, the company will have to report it to ENISA, with all the associated risks. For many businesses, it may simply not be worth it.

How can regulators in the EU more effectively maintain our security?

I have a few suggestions:

1. In the absence of a significant security incident, laws should provide organizations with sufficient time to mitigate a discovered vulnerability before being forced to disclose it.
2. The authorities should ensure that any vulnerability information thus disclosed is properly secured and shared only on a need-to-know basis.
3. Government agencies should be prohibited from using this information for other purposes (e.g., surveillance).
4. The law should make a greater effort to protect security researchers by distinguishing between vulnerabilities discovered in the course of good-faith research and vulnerabilities exploited by malicious actors.

Malicious actors already have many advantages in the current cybersecurity landscape—we should not be handing them another one. We are not alone in believing that unpatched vulnerabilities should not be subject to immediate reporting: a statement from DigitalEurope expressing this view was signed by numerous significant stakeholders, both public and private, including the Hacking Policy Council co-founded by HackerOne. Consumer protection groups have also expressed alarm at the CRA's vulnerability disclosure requirements and urged EU policymakers to install safeguards against misuse.

Working together, the public and private sectors must strive toward policy outcomes that enable vulnerability discovery and disclosure without putting businesses and consumers at needless risk. To that end, HackerOne, in partnership with the Hacking Policy Council and our fellow signatories, will continue to cooperate with the European Parliament, Commission, and Council and advocate for the kind of vulnerability reporting that keeps everyone maximally safe.