Keyboard sounds can reveal secrets: researchers


A group of English academic researchers has trained a deep learning model to reveal secrets by recording the sound of keystrokes, and claimed as much as 95 percent accuracy.



Successfully identifying keystrokes by their sound would create a side-channel attack against user passwords.

In a pre-publication paper released on arXiv, Joshua Harrison, Ehsan Toreini, and Maryam Mehrnezhad said their model performed best when classifying keystrokes recorded on a nearby phone, but even keystrokes recorded over a Zoom call were classified with 93 percent accuracy.

Their fully-automated “acoustic side-channel attack” (ASCA) was based on training the model against keystrokes recorded on the target laptop: “36 of the laptop’s keys were used (0-9, a-z) with each being pressed 25 times in a row, varying in pressure and finger, and a single file containing all 25 presses,” the paper states.
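Before training, a recording of 25 consecutive presses has to be split into individual keystrokes. A minimal sketch of energy-based segmentation in NumPy (the window size and thresholds here are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def isolate_keystrokes(signal, sr, win_ms=10, threshold=0.1, gap_ms=50):
    """Split a recording into keystroke segments via short-time energy.

    Thresholds and window sizes are illustrative, not the paper's values.
    Returns a list of (start_sample, end_sample) pairs.
    """
    win = int(sr * win_ms / 1000)
    n = len(signal) // win
    # Mean energy per non-overlapping window
    energy = (signal[:n * win].reshape(n, win) ** 2).mean(axis=1)
    active = energy > threshold
    # Merge active windows closer together than gap_ms into one press
    gap = int(gap_ms / win_ms)
    presses, start, last = [], None, -gap
    for i, a in enumerate(active):
        if a:
            if start is None or i - last > gap:
                if start is not None:
                    presses.append((start * win, (last + 1) * win))
                start = i
            last = i
    if start is not None:
        presses.append((start * win, (last + 1) * win))
    return presses

# Synthetic demo: three "keystrokes" as 50 ms noise bursts in silence
sr = 16000
sig = np.zeros(sr)
rng = np.random.default_rng(0)
for t in (0.1, 0.4, 0.7):
    i = int(t * sr)
    sig[i:i + 800] = rng.normal(0, 1, 800)

print(len(isolate_keystrokes(sig, sr)))  # three bursts detected
```

Each detected segment can then be excerpted and labelled with the key that was pressed, giving the per-key training examples the paper describes.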

That means the ASCA needs access to the target’s machine to work, so that the attacker can associate particular keystrokes with the sound they make. 

However, the researchers chose a 16-inch MacBook Pro as their experimental machine because the available variants all use the same keyboard: “The small number of available models at any one time (presently three, all using the same keyboard) means that a successful attack on a single laptop could prove viable on a large number of devices”, the paper states.

Classification was performed not on the raw audio, but on a visual representation of it, because the researchers chose CoAtNet, a Google-developed image classification deep learning model.

The captured keyboard sounds were converted into a visual representation called a “mel-spectrogram”, which represents a signal’s loudness as it varies over time and across frequencies.
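The conversion can be sketched in plain NumPy (in practice a library such as librosa would typically be used; the FFT, hop, and filterbank sizes below are illustrative assumptions, not the paper's settings). An STFT power spectrum is pooled through triangular filters whose centres are evenly spaced on the mel scale, which mimics how human hearing spaces pitch:

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping from frequency in Hz
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr, n_fft=512, hop=128, n_mels=32):
    """Magnitude STFT followed by a triangular mel filterbank (illustrative sizes)."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n_fft] * window))
        frames.append(spectrum ** 2)  # power spectrum per frame
    power = np.array(frames).T  # shape: (n_fft // 2 + 1, n_frames)

    # Triangular filters with centres evenly spaced on the mel scale
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fbank[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i, k] = (r - k) / max(r - c, 1)
    return 10.0 * np.log10(fbank @ power + 1e-10)  # dB-scaled mel energies

# A one-second 440 Hz tone yields an image-like array: mel bands x time frames
sr = 16000
t = np.arange(sr) / sr
spec = mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr)
print(spec.shape)
```

The resulting two-dimensional array is what gets rendered as an image and fed to the classifier.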

The researchers said keystroke sounds change a little depending on the key’s position on the keyboard; by way of evidence, they added, most mis-classification identified the key adjacent to the one pressed.

Interestingly, the user’s typing style also affected the accuracy of classification: the model was only two-thirds as accurate when the user was a touch typist.

Joshua Harrison is from Durham University; Ehsan Toreini is from the University of Surrey; and Maryam Mehrnezhad is from Royal Holloway, University of London.
