Did you know that Dick Cheney, US Vice President from 2001 to 2009, had the wireless telemetry on his implantable cardioverter-defibrillator disabled while in office for fear of assassination?
That was in 2007, and the fear of what hackers could do to implanted medical electronics was already real.
Now, almost ten years later, that fear must be even greater for people who rely on implants, as it becomes widely understood that any electronic device can be tampered with by a motivated attacker.
Researchers have already demonstrated that attackers could interfere with insulin pumps and implantable defibrillators. With the increasing use of electronic brain implants, we can expect some researchers to begin probing the security of those devices as well.
A group of researchers, neurosurgeons, and philosophers from Oxford Functional Neurosurgery and several Oxford University departments recently published a paper exploring the issue of brain implant hacking ("brainjacking").
Neuroimplants are used to treat a wide range of neurological and psychiatric conditions – Parkinson’s disease, chronic pain, depression, etc. – and will likely be used for an even wider range of ailments, as well as a way to correct “abnormal moral behaviour,” in the future.
“Until recently the risk of neurological implants being used against their users was firmly in the realm of fantasy. However, the increasing sophistication of invasive neuromodulation, coupled with developments in information security research and consumer electronics, has resulted in a small but real risk of malicious individuals accessing implantable pulse generators (IPGs),” they noted.
These implants could therefore be switched off or made to function in undesired ways by unauthorized persons, leading to tissue damage, increased pain, altered impulse control, unwanted mental conditioning, and more, all to the detriment of the people who rely on them.
“The current risk of brainjacking is low,” the group has noted, but “it is better to consider this issue seriously now, rather than in several years’ time when the sophistication of these implants is far greater, as would be the harm that an attacker may cause by subverting them.”
In the paper, they describe a number of attack scenarios that might be pulled off even now, while noting that there is no evidence any of them has ever been attempted. Even a successful attack, though, might well have gone unnoticed.
“Wireless exploitation of implants is also likely to be subtle – device failures are a somewhat common eventuality and post-failure device diagnostics are rarely performed. Even if an attack were detected, tracking down the attacker would be a highly challenging task,” they noted.
Secure implant design
The group also examined current secure implant design, and the competing factors manufacturers must weigh when adding features to these implants. Rarely is the balance between usability and security so critical to get right: stronger authentication protects the patient from attackers, but can also slow or block legitimate access in an emergency.
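To make that trade-off concrete, here is a minimal, purely illustrative sketch (not any vendor's actual protocol, and all names are hypothetical) of one standard mitigation: authenticating wireless commands to an implantable pulse generator with a shared-key HMAC and a replay counter. The same mechanism that blocks an attacker's forged or replayed commands also means a clinician without the key cannot reprogram the device, which is exactly the usability cost the authors describe.

```python
import hmac
import hashlib
import os

# Hypothetical sketch of command authentication for an IPG's telemetry
# link. A 256-bit key is assumed to be provisioned at clinic pairing.

def sign_command(key: bytes, command: bytes, counter: int) -> bytes:
    """Tag a command with an HMAC-SHA256 over the payload plus a
    monotonically increasing counter; the counter prevents an attacker
    from replaying a previously captured, valid command."""
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_command(key: bytes, command: bytes, counter: int,
                   tag: bytes, last_counter: int) -> bool:
    """Accept only commands whose tag verifies and whose counter is
    fresher than the last accepted one."""
    if counter <= last_counter:
        return False  # stale or replayed command
    expected = hmac.new(key, counter.to_bytes(8, "big") + command,
                        hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, tag)

key = os.urandom(32)
tag = sign_command(key, b"set_amplitude:2.5", counter=1)
print(verify_command(key, b"set_amplitude:2.5", 1, tag, last_counter=0))
print(verify_command(key, b"set_amplitude:9.9", 1, tag, last_counter=0))
print(verify_command(key, b"set_amplitude:2.5", 1, tag, last_counter=1))
```

A tampered payload or a replayed counter fails verification; only the original, fresh command is accepted. Real devices would additionally need key-management and emergency-override provisions, which is where the security-versus-usability tension actually bites.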
“It may be valuable to develop codes of best practice for neurosecurity, or to formulate overall guidelines for medical device security that can be tailored to the specific requirements of neural implants. Any such code should be formulated to encourage cooperation between stakeholders and be sufficiently flexible to adapt to the rapid pace of change in neurological implant design,” they pointed out.
“Device manufacturers must strive to improve upon recent advances, ensuring that security concerns are considered throughout the design process and not relegated to an afterthought, and should cooperate with security researchers who seek to responsibly disclose design flaws. Regulatory bodies must balance use of their powers to encourage good neurosecurity practices with the risk of impairing real-world security through overly burdensome regulations.”
“Given that neurosecurity is not an immediate concern, there is sufficient time for manufacturers and regulatory agencies to carefully consider methods of risk mitigation. While there is a responsibility for manufacturers to make their devices secure, the expected value of any novel security features should be carefully weighed against other clinically relevant factors, and innovation should not be unduly stifled by the demands of neurosecurity,” they concluded.