What’s at stake in the Computer Fraud and Abuse Act (CFAA)

Two weeks ago, the Supreme Court heard oral arguments in Van Buren v. United States, the landmark case over the Computer Fraud and Abuse Act (CFAA). Nathan Van Buren, the petitioner in the case, is a former police officer in Georgia who used his lawful access to a police license plate database to look someone up in exchange for money. Van Buren was indicted and convicted of violating the CFAA for using his otherwise lawful access to the database for a purpose it was not intended to serve.

The fundamental question presented to the Supreme Court is whether someone who has authorized access to a computer violates federal law by accessing the same information in an unauthorized way. While the question may sound narrow, this is a welcome and long-overdue case that could have a major impact on security researchers, consumers, and corporations alike.

Intended as the United States’ first anti-hacking law, the CFAA was enacted almost thirty-five years ago, long before lawyers and technologists had any sense of how the Internet would proliferate and evolve. In fact, the Act is outdated enough that its definition of a computer specifically excludes automated typewriters and portable handheld calculators.

Since its inception, the law has been applied to everything from simple terms-of-service violations, like the infamous case of Aaron Swartz downloading articles from the digital library JSTOR, to the indictment of nation-state hackers and the extradition case against Julian Assange.

The core of the problem lies in the vague, perhaps even draconian, definition of “unauthorized” computer use. While the law has been amended several times, including to clarify what counts as a protected computer, the ambiguity of “unauthorized access” puts the average consumer at risk of breaking federal law. Under the Ninth Circuit’s reading, you could be committing a felony simply by sharing subscription passwords.

The stakes are particularly high for security researchers who identify vulnerabilities for companies that lack safe harbor or bug bounty programs. White-hat hackers, who act in good faith to report vulnerabilities to a company before they can be exploited, face the same legal risks as cybercriminals who actively exploit and profit from those vulnerabilities. Say, for example, that a security researcher identifies a significant vulnerability in a pacemaker produced by a healthcare company. If the company hasn’t published a safe harbor agreement, that researcher could face up to ten years in prison for reporting a vulnerability that could potentially save someone’s life.

On the less drastic side, security researchers who work with companies to protect their systems face legal risk in their day-to-day activities. During a penetration test, for example, a client will list the assets that are “in scope” for testing and state which tests are prohibited (e.g., any action that causes a denial of service and crashes a server). A penetration tester could face legal liability, even prison time, for inadvertently testing an asset that is “out of scope,” or for accidentally executing a test that exceeds authorized use. Arguably, engineers could face the same legal liability if they access the wrong database or push the wrong code.
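To make the scoping problem concrete, here is a minimal, hypothetical sketch of the kind of guardrail a tester might build before touching any asset. The domain names, rule names, and function are all invented for illustration; real engagements define scope in the contract, not in code.

```python
# Hypothetical scope definition for a penetration test engagement.
# Hosts and rule names are invented for illustration only.
IN_SCOPE = {"app.example.com", "api.example.com"}
OUT_OF_SCOPE = {"billing.example.com"}        # explicitly excluded by the client
PROHIBITED_TESTS = {"denial_of_service"}      # actions the contract forbids

def is_allowed(host: str, test_type: str) -> bool:
    """Return True only if both the target and the test type are authorized."""
    if host in OUT_OF_SCOPE or host not in IN_SCOPE:
        return False
    return test_type not in PROHIBITED_TESTS

# Probing the wrong host, or running a forbidden test, is exactly the
# kind of inadvertent overreach the CFAA's broad language could reach.
print(is_allowed("app.example.com", "sql_injection"))      # True
print(is_allowed("billing.example.com", "sql_injection"))  # False
print(is_allowed("app.example.com", "denial_of_service"))  # False
```

The point of the sketch is that the boundary between lawful and unlawful conduct can hinge on a single hostname or test type, which is a thin line on which to hang felony liability.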

On one hand, the broad and ambiguous language of the CFAA provides robust legal protection for companies and opens the door to federal resources, like the FBI, if a significant breach occurs. Some companies have argued that narrowing the scope of the CFAA would not damage security programs, since companies are already contracting security services, including crowdsourced programs like bug bounties. One company received pushback from the information security community when it accused MIT security researchers of acting in “bad faith” by identifying vulnerabilities in its mobile app. Others have argued that the difficulty of attribution, meaning the ability to accurately identify a threat actor, makes it hard to distinguish good actors from cybercriminals.

Yet the CFAA is a reactive measure, enforced only after an incident; companies should ideally focus on preventative measures that protect against a breach before it occurs. It is arguably to the detriment of companies like Voatz, which serves the public through its voting app, that the CFAA is so broad: security researchers may decline to investigate or report vulnerabilities for fear of being reported to the FBI. And while attribution can be incredibly difficult, good-faith security researchers identify themselves when they report a vulnerability. Unlike malicious actors, who exploit vulnerabilities for their own gain, security researchers act to improve a company’s security posture and protect citizens from harm.

All companies should use security services, like penetration testing, bug bounty programs, and safe harbor policies, to quickly identify and triage vulnerabilities. However, security researchers have different methods for testing and may not be able to cover every asset a company owns. For example, an ethical hacker focused on exploiting a SQL injection in a database may miss exposed credentials on the Internet that allow access to a protected server. And with the rapid pace of DevSecOps, engineers may push changes a dozen times or more in a single day, each one a potential source of new vulnerabilities.

Revolutionary changes in the structure and pace of the Internet, and of the software that fuels it, mean that ad-hoc or occasional security testing is not enough to protect against vulnerabilities. We need the full force of security researchers, and all companies should encourage and protect their work.

Should the Supreme Court affirm Van Buren’s conviction, the legal landscape will remain largely the same. Security researchers and consumers alike will face liability despite acting in good faith, and the federal government will continue to exercise broad power over trivial and ambiguous breaches of authorized computer use.

Yet the Supreme Court now has the opportunity to limit the scope of the CFAA and restrict what the federal government can prosecute. Doing so will enhance the security of the Internet, protect security researchers, and limit the legal liability of everyday Internet users who click through terms of service without reading them.

A lot has changed since the CFAA was enacted in 1986. While the Supreme Court’s decision could drastically change the information security landscape, it alone is not enough. As we’ve seen with the Internet of Things bill that recently passed the House, the United States needs modern legislation to secure the rapidly changing technology of the twenty-first century.

In short, security researchers who act in good faith are exposing themselves to enormous legal risk because of the broad interpretation of the CFAA. This is to the detriment of anyone who values the protection of their information. We are in dire need of reform in the United States, but in the meantime, there is hope that the Supreme Court will narrow the scope of the CFAA to protect consumers and security researchers alike.
