Laptop users are at risk of having their sensitive information stolen by artificial intelligence (AI) that can identify keystrokes by sound with 95 per cent accuracy, according to a new study by researchers from British universities.
The study, authored by Joshua Harrison, a software engineer at Amazon, University of Surrey lecturer Ehsan Toreini, and Royal Holloway, University of London senior lecturer Maryam Mehrnezhad, details what it calls “acoustic side channel attacks,” in which a malicious third party uses a secondary device, such as a cell phone sitting next to a laptop or an unmuted microphone on video-conferencing software such as Zoom, to record the sound of typing.
The third party then feeds the recording through a deep-learning AI trained to recognize the sound of individual pressed keys to decipher what exactly was typed. The researchers tested their attack on a MacBook Pro and found that they could correctly identify keystrokes 95 per cent of the time when the recording was made with a nearby phone and 93 per cent of the time when the recording was made during a Zoom call.
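For illustration only, the sketch below shows one way such a keystroke classifier could be structured in Python. It is not the researchers' code: the 16 kHz sample rate, the 64 mel bands, the 36-key vocabulary, and the small convolutional network are all assumptions made for the example, which simply classifies short clips of individual keystrokes from their audio.

```python
# Illustrative sketch (not the study's actual model): classify isolated
# keystroke recordings by converting each clip to a log-mel spectrogram
# and feeding it to a small CNN. Assumes short mono clips at 16 kHz,
# one keystroke per clip; all sizes here are assumptions for the example.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000
NUM_KEYS = 36  # hypothetical key vocabulary, e.g. a-z plus 0-9

# Turn a raw waveform into a log-mel "image" that the CNN can classify.
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=64
)
to_db = torchaudio.transforms.AmplitudeToDB()


class KeystrokeClassifier(nn.Module):
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_keys)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> spectrogram: (batch, 1, n_mels, frames)
        spec = to_db(to_mel(waveform)).unsqueeze(1)
        return self.head(self.features(spec).flatten(1))


if __name__ == "__main__":
    model = KeystrokeClassifier()
    clips = torch.randn(8, SAMPLE_RATE)  # stand-ins for 8 recorded keystrokes
    logits = model(clips)                # (8, NUM_KEYS) scores per key
    print(logits.argmax(dim=1))          # predicted key index for each clip
```

Treating the audio as a spectrogram "image" so that a standard image classifier can learn the differences between keys is a common approach to this kind of sound classification; the details of the researchers' own pipeline may differ.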
The researchers say the threat of acoustic side channel attacks is growing as AI technology becomes more sophisticated. They recommend countermeasures such as noise-cancellation technology to mask the sound of typing, or the development of AI models that are better at recognizing the sound of shift keys.
The sources for this piece include an article in Fortune.