An eavesdropper can tell what someone is typing from the sound the keyboard makes.
The idea is that different keys tend to make slightly different sounds, and although you don't know in advance which keys make which sounds, you can use machine learning to figure that out, assuming that the person is mostly typing English text. (Presumably it would work for other languages too.)
The recognizer bootstrapped this way can even recognize random text such as passwords: "In our experiments, 90% of 5-character random passwords using only letters can be generated in fewer than 20 attempts by an adversary; 80% of 10-character passwords can be generated in fewer than 75 attempts. Our attack uses the statistical constraints of the underlying content, English language, to reconstruct text from sound recordings without any labeled training data."
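For a rough sense of how the bootstrapping works, here is a much-simplified, hypothetical sketch, not the paper's actual pipeline: cluster the keystroke recordings by their acoustic features, then guess which cluster is which letter by matching cluster frequencies against English letter frequencies. The `keystroke_features` input, the choice of 26 clusters, and the unigram-only matching are all assumptions for illustration; the real attack uses cepstral features, a language model, and iterative retraining.

```python
# Toy illustration only: unsupervised labeling of keystroke sounds using
# English letter statistics. Assumes `keystroke_features` is an (n, d) array
# of acoustic feature vectors, one row per detected keystroke.
import numpy as np
from sklearn.cluster import KMeans

# Approximate English letter frequencies (percent), 'e' most common.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7, 's': 6.3,
    'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8, 'u': 2.8, 'm': 2.4,
    'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0, 'p': 1.9, 'b': 1.5, 'v': 1.0,
    'k': 0.8, 'j': 0.15, 'x': 0.15, 'q': 0.1, 'z': 0.07,
}

def bootstrap_labels(keystroke_features: np.ndarray) -> list:
    """Guess a letter for each keystroke with no labeled training data."""
    # 1. Group similar-sounding keystrokes: ideally one cluster per key.
    clusters = KMeans(n_clusters=26, n_init=10, random_state=0).fit_predict(
        keystroke_features)

    # 2. Rank clusters by how often they occur and pair them with letters
    #    ranked by English frequency: the most common cluster becomes 'e', etc.
    cluster_order = np.argsort(-np.bincount(clusters, minlength=26))
    letter_order = sorted(ENGLISH_FREQ, key=ENGLISH_FREQ.get, reverse=True)
    cluster_to_letter = {int(c): l for c, l in zip(cluster_order, letter_order)}

    # 3. The resulting noisy transcript is what the real attack then cleans up
    #    with a language model and uses to train per-key classifiers.
    return [cluster_to_letter[int(c)] for c in clusters]
```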
Soon, high-security keyboards will have to generate spurious noises that mimic the sounds of keys. Maybe it would suffice to record the sound of a key being struck and replay it some number of times, with a random delay between iterations.
It might still be possible to work out the mapping between sounds and keys, but it should be very hard to tell which keys are actually being struck, and precisely when.
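As a sketch of that countermeasure (my own illustration, not anything an existing keyboard does): a background thread keeps replaying one recorded click at random intervals, so a microphone hears a stream of spurious, identical-sounding keystrokes mixed in with the real ones. `play_recorded_click()` is a hypothetical placeholder for whatever audio-playback API is available.

```python
import random
import threading
import time

def play_recorded_click() -> None:
    """Placeholder: play back the pre-recorded sound of a key being struck."""
    pass  # hand the recorded sample to an audio output library here

def spurious_click_loop(min_delay_s: float = 0.05, max_delay_s: float = 0.4) -> None:
    """Emit decoy clicks forever, separated by random delays."""
    while True:
        time.sleep(random.uniform(min_delay_s, max_delay_s))
        play_recorded_click()

# Run the decoy generator on a daemon thread so it exits with the program.
threading.Thread(target=spurious_click_loop, daemon=True).start()
```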
From a comment on Bruce's post: One of those projection keyboards, which projects a keyboard layout onto a convenient flat surface and detects where your fingers hit, would also circumvent this attack. You might want to program the display to black out and shift to a slightly different location every few minutes.
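The blackout-and-shift idea is just a re-randomization schedule. A hypothetical sketch, assuming a `project_keyboard(dx_mm, dy_mm)` call that real projection keyboards do not actually expose:

```python
import random
import time

def project_keyboard(dx_mm: float, dy_mm: float) -> None:
    """Placeholder: redraw the projected layout shifted by (dx_mm, dy_mm)."""
    pass

def reposition_every_few_minutes(interval_s: float = 300.0,
                                 jitter_mm: float = 15.0) -> None:
    """Black out and re-project the keyboard at a slightly different spot."""
    while True:
        project_keyboard(random.uniform(-jitter_mm, jitter_mm),
                         random.uniform(-jitter_mm, jitter_mm))
        time.sleep(interval_s)
```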