X

SRLabs Hack Shows It's Easy To Steal Your Data With AI Assistants

Home AI assistants can be both fun and useful but users may want to exercise more caution following the discovery of a hack to listen in and steal data from both Google and Amazon devices.

The “malicious” code was written out and tested by “White hat” hackers at Germany-based Security Research Labs (SRLabs). That means that the hack didn’t necessarily cause any users harm. The idea was to showcase vulnerabilities in both Amazon’s Alexa and Google’s Assistant. In effect, the group wanted to show how bad actors could listen in on users without permission or attempt to steal private data.

Perhaps more concerning, each of the apps — created as Alexa Skills or Google Actions — passed Google and Amazon’s review processes. The companies have since altered that process and claim that no users are likely to have been impacted by the vulnerabilities.

How did SRLabs pull this off?

SRLabs created a total of eight apps, four in English and another four in German. Each utilized the same basic flaws in both Google Assistant and Amazon Alexa to accomplish the tasks. Each also depended on the basic building blocks put in place by the respective companies. While each platform has underlying differences, it wouldn’t necessarily have been difficult to recreate the malicious activity.

In the first case, SRLabs designed the apps to listen in on users. That hack centered around the use of characters that can’t be spoken by the assistants. That was used in combination with built-in means to keep the listening process going, specifically the building blocks that allow users more time to talk to the assistant.

From the user, perspective, the hack effectively left the assistant silent over extended periods of time. That was used to give off the impression that the app called forward by the user was no longer running.

Developers implemented key stop phrases traditionally used to end interactions with apps. They stacked those into the code after being accepted onto the platforms. But those were used alongside malicious code to keep the logging activity going instead of stopping the interaction.

Then, the app simply logged what the microphones ‘heard’ before sending that data off to the developer’s servers.

For stealing data, that pause was used to make users think the system had shut down app interactions. But in that case, the code was used after the app reported what sounded like a standard error to the user.

In the background, the app switched to a voice output that was similar to the respective system’s stock voice. That voice was used to request user data — in this case, a password. The password, the malicious apps told users, was required to perform an update.

These were essentially just tests, so what are Google & Amazon doing to protect your data?

The hack-laden apps could just as easily have been used to steal other sensitive personal data via the AI assistants. SRLabs explicitly points out that the app could have been programmed to access credit card information. It isn’t out of the question that malicious entities could combine the two types of attack to take matters further either.

For instance, in more complex attacks, it might access the user’s password and then go silent again. Later on, it might attempt a similar method to access credit card information. Meanwhile, a bad actor might record through the mics. The stolen information could be used along with personally identifiable data and in combination with other tracking methods to further garner details about the user’s activity.

Google and Amazon aren’t resting on the vulnerabilities with their home automation and assistant platforms though.

Amazon blocked the Alexa Skill and says it has put “mitigations in place” to prevent this type of vulnerability from being taken advantage of. Specifically, Amazon says its system now monitors for this type of behavior. Apps that exhibit the behavior will be rejected before becoming publically available or taken down if the behavior is recognized in available Alexa Skills.

Google claims that it has removed the apps added by SRLabs. But it also says that new mechanisms are being put in place to recognize malicious behavior of that type. Furthermore, company employees have reportedly claimed that it is conducting a review of third-party actions. Those that may be compromised will be paused and either removed or reinstated once the review is finished.

Both companies have stressed that they won’t ask for personal data such as passwords via their respective AI interfaces. It’s worth noting too, that neither requires interaction for updates. Those happen automatically.