You’d think with all the recent discussion about consent, researchers would observe ethical boundaries more carefully. Yet, a group of researchers from the University of Minnesota not only crossed the line but ran across it, screaming defiantly the whole way. In response, the maintainers of the Linux kernel took the unprecedented step of banning the entire University of Minnesota from contributing to it.
The open-source community is built upon the principles of trust, cooperation, and transparency. This group donates time and high-value industry skills to create, maintain, and improve free and widely adopted software in the interest of making technology more accessible.
Yet, a group of researchers abused this community’s trust by not only sneaking vulnerabilities into the code base but then effectively bragging about it in the name of research. In February, a team from UMN published a research article outlining how they systematically and stealthily introduced vulnerabilities into open-source software.
They did this through commits that appeared beneficial but, in actuality, introduced critical vulnerabilities. Though the paper framed its target as open source in general, much of the researchers’ attention was aimed at the Linux kernel – the core of the operating system, which manages the interactions between hardware and applications.
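The patches in question were ordinary-looking kernel C. As a purely illustrative sketch – the names and logic here are invented, not taken from any actual UMN submission – a one-line “cleanup” can look diligent in review while quietly breaking the code’s memory-ownership rules:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical driver object -- not from any real patch. */
struct device {
    char name[16];
};

/*
 * Ownership convention: the CALLER allocates and frees dev,
 * on both the success and failure paths.
 *
 * Now suppose a patch "improves" the error path by adding the
 * free() below. The cleanup looks helpful in isolation, but
 * because callers already free the device on failure, the patch
 * introduces a double free (and potential use-after-free).
 */
static int register_device(struct device *dev, int bus_ok)
{
    if (!bus_ok) {
        free(dev);   /* the "helpful" line added by the patch */
        return -1;   /* caller will free dev again: double free */
    }
    printf("registered %s\n", dev->name);
    return 0;
}
```

Catching a change like this requires a reviewer to know the ownership convention of every caller, which is precisely why such patches are hard to spot in a high-volume review queue.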
“Experiments” like this, conducted without informed consent, call into question the very ethics that even the most novice cybersecurity professionals learn. Moreover, after publishing the paper, the team continued this non-consensual testing until they were called out publicly on the Linux Kernel Mailing List. Reviewers had noticed that dubious patches kept coming in. When confronted, the researchers dismissed the concerns and claimed the code recommendations came from a static analyzer they were still developing.
The Kernel group responded, pointing out that “They obviously were _NOT_ created by a static analysis tool that is of any intelligence, as they all are the result of totally different patterns, and all of which are obviously not even fixing anything at all.”
In light of what appears to be blatant deceit and an unwillingness to take responsibility, the Kernel group had no choice but to draw a hard line in the sand. They pointed out that the experiment was not consensual and that when patches come from a tool under test, that fact is normally stated explicitly. This research team’s history of unrepentant abuse, and their University’s failure to remediate it, left the Kernel group with little choice. They banned the entire University from future contributions and are working to remove all prior submissions.
This whole situation revolves around the concepts of ethics within the cybersecurity profession. When conducting cybersecurity research, how do you seek consent when consent might alter the findings? Do the ends justify the means?
The importance of ethics in cybersecurity research
The researchers in the above situation could have behaved more ethically and still managed to do their research.
One option is that the research team could have started a new open-source project owned and managed by them. As the owners of the project, they would have overseen the final commit process. This would have let them inject subversive code for general review, alongside what others submitted, to see what survived the process – the stated goal of their research. Once a patch hit the final approval gate, they could have removed the bad submissions, preventing anything dangerous from going live.
Alternatively, they could have worked with the Linux Foundation to conduct the research as a controlled experiment. With the Foundation’s consent, the admins would have known which submissions were subversive, allowing them to be filtered out before going live.
While both of these options reduce the risk of a vulnerability making it through to a live product that people depend on, they still skirt the ethics of what amounts to a social experiment on individuals who donated their time and skill in good faith. Either would undoubtedly have been better than the path they chose, but still not completely ethical. Experimenting on or with human behavior is always a tricky proposition.
As it is, the path they chose, and their reasoning behind it, harkens back to the earliest days of technology, when the line between good-faith security testing and cybercrime was blurred. That blurring was the impetus for legislative intervention and for a code of ethics within the hacking / cybersecurity community. Ethics are the critical line that divides a hacker / White Hat from a bad actor / Black Hat.
Consent in security
To be taken seriously and not confused with the criminal element, the cybersecurity community openly embraces an ethical approach to all activities. Published codes of ethics from SANS and EC-Council are prominent examples, and the latter explicitly mentions obtaining a third party’s consent. This is vital because even in the best circumstances, security researchers doing their jobs responsibly and appropriately may be forced to defend themselves against legal challenges.
Finding security vulnerabilities is essential to the development of security products and processes because the last thing anyone wants is for the criminals to show us our weak points via an attack. Ignoring consent risks painting research as a cybercrime. Banks definitely want to know if there are weak points in their security. Yet, they aren’t favorably disposed to the stress and expense of an incident. No one wants their security team mobilized, customer accounts locked, and systems pulled offline because a random “researcher” was testing.
This is the reason that penetration testing companies exist. The role of a penetration tester is scoped. When they are hired, the parameters and limitations of the “experiment” or test are clearly defined. There is even a solid ethics component in the education necessary to become a certified pen tester.
Ethics in research
Much like the security community, the scientific community subscribes to a set of ethical guidelines for how it conducts research. UMN, specifically, has an Institutional Review Board (IRB), which defines what research with human subjects is acceptable and reviews and approves such studies.
Apparently, the IRB at UMN does not consider the Linux Kernel developer community to be human, as, according to the research paper, it granted the team an exemption. I am not sure how studying how a development team reacts to subversive behavior is not a study of humans or human behavior. However, my expertise is cybersecurity for a reason. We should also consider the possibility that the IRB at UMN was misled.
The fact that UMN has recently launched an investigation seems to support that possibility.
Regardless of the IRB’s decision, the researchers knowingly elected to conduct a study on unwilling participants. This raises the question of whether they felt the results of this research justified the questionable means of obtaining them. Throughout the history of science, this has been a long-running debate. IRBs were put in place at institutions to prevent the kinds of abuses that can occur without consent (e.g., the Tuskegee Syphilis Study). While this incident of non-consensual experimentation pales next to that and other brutal historical cases, the fact remains that the slope is slippery.