Drowning Dalek commands Siri in voice-rec hack attack
University boffins have brewed one of the more convoluted mechanisms for loading malware onto phones: surreptitious Google Now and Siri voice commands hidden in YouTube videos.
For the attack to work, phones need to be in a state where they can receive voice commands – a feature often left unlocked – and close enough to the audio source for the instructions to be understood.
There’s also the trouble of masking the phone’s acknowledgment beep, which confirms that a voice command has been received and could alert users to the attack.
However, the boffins – Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, and Wenchao Zhou of Georgetown University, together with Nicholas Carlini, Pratyush Mishra, and David Wagner of the University of California, Berkeley – have found workarounds.
They write in the paper Hidden Voice Commands [PDF] that the giveaway beeps can, under very precise conditions, be countered with noise cancelling from the same attacking audio source.
Failing that, users will most likely just ignore the beeps, LED flashes, and vibrations that Apple and Google bake into their operating systems as alerts.
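The cancellation trick rests on ordinary destructive interference: play a phase-inverted copy of the beep from the attacking speaker at just the right moment and the two waveforms sum to silence. A minimal sketch of the idea, using an assumed 1 kHz alert tone and idealised timing (real cancellation, as the paper notes, requires very precise conditions):

```python
import math

# Illustrative parameters -- the actual beep frequency and length
# depend on the phone and are assumptions here.
SAMPLE_RATE = 44_100   # samples per second
DURATION = 0.2         # a 200 ms beep
FREQ = 1_000.0         # assumed 1 kHz acknowledgment tone

n = int(SAMPLE_RATE * DURATION)

# The phone's beep, and the attacker's phase-inverted copy.
beep = [0.5 * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE) for i in range(n)]
anti_beep = [-s for s in beep]

# What a listener hears: the two signals superimposed.
residual = [a + b for a, b in zip(beep, anti_beep)]
peak = max(abs(s) for s in residual)
print(peak)  # 0.0 -- perfect cancellation under these idealised conditions
```

In practice any timing or amplitude mismatch between the beep and the inverted copy leaves an audible residue, which is why the researchers only claim this works under very precise conditions.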
The paper builds on existing work and shows that:
- Hidden voice commands can be constructed even with very little knowledge about the speech recognition system in ‘significantly improved’ attacks.
- Adversaries with significant knowledge of the speech recognition system can construct hidden voice commands that humans cannot understand.
- Detection and mitigation strategies, such as speaker recognition and audio CAPTCHAs, can limit the effects of the above attacks.
In a proof-of-concept video the boffins place a phone in an empty conference room three metres (10 feet) from a speaker. Commands are issued that sound like a drowning Dalek to Vulture South’s ears. That garbling makes the commands difficult for humans to understand but passable for Siri and her ilk.
The attackers activate airplane mode (a denial-of-service attack) and open the website xkcd.com, which they write in the paper could be swapped for a phishing or malware download site.
It is a successful effort, yet malware writers will find much more success baking their wares into fake Pokémon Go apps, particularly those aimed at Canadians, Europeans, and residents of Singapore, who cannot yet download the viral app and can therefore be tricked into dodgy substitutes.
Bootnote: The commands are easy to understand in the proof-of-concept video thanks to priming effects; the researchers say that if you hadn’t read this article, you wouldn’t know words were spoken at all. ®