Study: YouTube Videos Pose Voice Command Risk To Smartphones

YouTube has changed the way everyone watches videos. With new content uploaded every minute and accessible from almost anywhere, entertainment is always just a few taps away. However, a recent study by researchers at Georgetown University points out that YouTube videos could potentially be used to issue malicious voice commands to smartphones.

Voice commands are a convenient way for users to interact with their smartphones quickly and easily. In recent years, always-listening functionality has become popular. It allows a device to receive voice commands without the user physically touching it, in some cases even when the screen is turned off. While this can prove incredibly useful in situations such as answering an incoming call while driving, it also means the device can pick up audio from other sources. If a YouTube video, for example, contained a hidden voice command, that command could trigger the phone’s voice assistant to perform an unwanted action, such as visiting a URL that hosts malware.

Such commands can be distorted so that the human ear struggles to recognize them as speech, while the voice assistant software still understands them. Using this technique, a smartphone could be triggered not only by a video it is playing itself, but also by a video playing on a nearby computer. According to one of the researchers, Micah Sherr, “it’s a numbers game”. The technique will not work on every viewer of a video carrying a hidden command, but it is likely to work at least some of the time, so a popular video could put a large number of potential victims at risk.
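
To make the risk concrete, the sketch below shows how little stands between transcribed audio and an action on a device. It is a minimal illustration, not the researchers’ method: the file name video_audio.wav, the wake phrase, and the command handling are all hypothetical, and it uses the open-source SpeechRecognition library for transcription. A real assistant pipeline is far more complex, but the key point carries over: the software acts on whatever it can transcribe, whether or not a human listener noticed a command.

```python
# Minimal sketch (not the researchers' code): a toy "assistant" loop that acts
# on any audio it can transcribe, illustrating why a hidden command in a
# video's soundtrack could trigger an action such as opening a URL.
# Assumes the open-source SpeechRecognition package and a hypothetical file
# "video_audio.wav" containing audio extracted from a video.
import webbrowser

import speech_recognition as sr

WAKE_PHRASE = "ok google"  # hypothetical wake phrase for this illustration


def transcribe(wav_path: str) -> str:
    """Transcribe a WAV file using the free Google Web Speech API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return ""  # nothing intelligible to the recognizer, or no network


def act_on_transcript(text: str) -> None:
    """Naively execute a command when the transcript begins with the wake phrase.

    Nothing here checks who spoke or whether the user intended the action,
    which is exactly the gap that a hidden voice command exploits.
    """
    if not text.startswith(WAKE_PHRASE):
        return
    command = text[len(WAKE_PHRASE):].strip()
    if command.startswith("open "):
        # Turn e.g. "open malicious dot example" into "http://malicious.example"
        url = "http://" + command[len("open "):].replace(" dot ", ".").replace(" ", "")
        webbrowser.open(url)


if __name__ == "__main__":
    act_on_transcript(transcribe("video_audio.wav"))
```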

Some protection against this already exists as a side effect of existing features. For example, some voice assistants personalize voice recognition to the sound of a specific user’s voice. Voice assistants also typically use audible prompts and on-screen animations during interactions, so it would be difficult for a voice command to be carried out without the user noticing. Another possible defense would be to include filters in the voice assistant software that can judge whether a voice is human or computer-generated. The good news is that because this potential vulnerability is known, developers can take the appropriate steps to reduce the risk.
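
As a rough illustration of the filtering idea, the sketch below flags audio whose spectral flatness is unusually high, on the assumption (made here only for illustration) that heavily processed or machine-generated commands tend to sound more noise-like than natural speech. Detectors described in the research literature are considerably more sophisticated; the feature choice, the threshold, and the file name command.wav are all hypothetical.

```python
# Hypothetical human-vs-synthetic speech filter, for illustration only.
# Assumes the librosa library and an audio file "command.wav".
# The spectral-flatness feature and the 0.3 threshold are illustrative
# choices, not values from the study.
import librosa
import numpy as np

FLATNESS_THRESHOLD = 0.3  # hypothetical cut-off; would need tuning on real data


def looks_machine_generated(wav_path: str) -> bool:
    """Flag audio that is closer to noise than to natural speech."""
    samples, sample_rate = librosa.load(wav_path, sr=16000)
    # Spectral flatness is close to 1.0 for white noise and much lower for
    # tonal signals such as voiced speech.
    flatness = librosa.feature.spectral_flatness(y=samples)
    return float(np.mean(flatness)) > FLATNESS_THRESHOLD


if __name__ == "__main__":
    if looks_machine_generated("command.wav"):
        print("Suspicious audio: ask the user to confirm before acting.")
    else:
        print("Audio resembles natural speech: proceed as usual.")
```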