Researchers from several UK universities have demonstrated that recorded laptop keystroke sounds can be used to obtain sensitive user data, such as passwords, with high accuracy.
Sounds of keystrokes can reveal passwords, other sensitive data
Side-channel attacks (SCAs) are carried out by exploiting electromagnetic waves, power consumption, mobile sensors, and other device “emanations”.
But keystroke sounds can be just as useful, and they are easier to collect.
“The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output. For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound,” the researchers explained.
Training the model
Researchers ran their experiment on a 16-inch MacBook Pro (2021), using an iPhone 13 mini (placed 17 cm from the laptop) and the Zoom video-conferencing app.
They pressed 36 of the laptop’s keys 25 times and recorded the sound they made both via the smartphone microphone and via Zoom.
“Once all presses were recorded, a function was implemented with which individual keystrokes could be extracted. An energy threshold is then defined and used to signify the presence of a keystroke.”
Keystroke isolation process. (Source: A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards)
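The energy-threshold extraction the researchers describe can be sketched roughly as follows. This is a simplified illustration using numpy, not the paper's actual code; the window length, keystroke duration, and threshold values are assumptions.

```python
import numpy as np

def extract_keystrokes(signal, sample_rate, threshold,
                       window_ms=20, keystroke_ms=300):
    """Locate keystrokes by finding windows whose energy exceeds a threshold.

    Simplified sketch of energy-based segmentation; parameter values
    (window_ms, keystroke_ms) are illustrative, not the paper's.
    """
    win = int(sample_rate * window_ms / 1000)
    # Energy of consecutive, non-overlapping windows.
    energies = [np.sum(signal[i:i + win] ** 2)
                for i in range(0, len(signal) - win, win)]
    length = int(sample_rate * keystroke_ms / 1000)
    keystrokes, last_end = [], -1
    for idx, energy in enumerate(energies):
        start = idx * win
        # A window above the threshold marks the onset of a press;
        # skip windows that fall inside an already-extracted keystroke.
        if energy > threshold and start >= last_end:
            keystrokes.append(signal[start:start + length])
            last_end = start + length
    return keystrokes
```

Each returned slice is one candidate keystroke, ready to be converted into a spectrogram for classification.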
The sound wave of each keystroke was then converted into a mel-spectrogram (a visual representation of a sound's frequency content over time), and these spectrograms were used to train the CoAtNet deep learning model.
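A mel-spectrogram projects short-time Fourier transform magnitudes onto perceptually spaced frequency bands. The sketch below shows one minimal way to compute it with numpy; real pipelines typically use a library such as librosa or torchaudio, and the frame and filterbank parameters here are illustrative, not the paper's.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sample_rate, n_fft=512, hop=128, n_mels=64):
    """Compute a log mel-spectrogram: windowed FFT frames projected
    onto a triangular mel filterbank."""
    # Frame the signal and apply a Hann window before each FFT.
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # Power spectrum of each frame (non-negative frequencies only).
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (n_frames, n_fft//2 + 1)
    # Triangular filters centered at points evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log-compress, as is conventional for spectrogram inputs to CNNs.
    return np.log(spec @ fbank.T + 1e-10)  # (n_frames, n_mels)
```

The resulting 2-D array can be treated as an image, which is why an image classifier like CoAtNet can be trained on it directly.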
“When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model. When trained on keystrokes recorded using the video-conferencing software Zoom, an accuracy of 93% was achieved, a new best for the medium,” the researchers found. (The slight discrepancy is due to Zoom’s noise suppression feature.)
The researchers say their results prove the practicality of such side-channel attacks using off-the-shelf equipment and algorithms. And since we are now constantly surrounded by microphones – in phones, smartwatches, smart speakers, etc. – avoiding falling victim to such an attack may be difficult.
There are risk mitigation actions users can take, though, and they include:
- Changing one’s typing style (but how realistic is that?)
- Using software that generates fake keystrokes or white noise
- Using randomized passwords with multiple cases (instead of full words)
- Using biometric authentication instead of passwords to avoid data input via keyboard
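The second mitigation – masking the true keystroke audio with noise or decoy presses – could look roughly like this hypothetical sketch. The function name, noise level, and decoy rate are all illustrative (a real tool would play the resulting track through the speakers while the user types):

```python
import numpy as np

def masking_track(fake_keystrokes, duration_s, sample_rate=44100,
                  noise_level=0.05, rate_per_s=8, rng=None):
    """Mix white noise with fake keystroke sounds inserted at random times.

    `fake_keystrokes` is a list of short 1-D sample arrays (pre-recorded
    key sounds). Played alongside real typing, the track ensures an
    eavesdropper's recording also contains decoy presses.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = int(duration_s * sample_rate)
    track = rng.normal(0.0, noise_level, n)  # broadband white noise
    for _ in range(int(duration_s * rate_per_s)):
        ks = fake_keystrokes[rng.integers(len(fake_keystrokes))]
        start = rng.integers(0, n - len(ks))
        track[start:start + len(ks)] += ks   # overlay a decoy press
    return np.clip(track, -1.0, 1.0)         # keep samples in playable range
```

The idea is to degrade the attacker's signal-to-noise ratio and to pollute the set of isolated keystrokes with presses the victim never made.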