Your daily selection of the hottest trending tech news!
According to Fast Company (this article and its images were originally published on Fast Company on September 28, 2018, at 7:02 AM):
Scientists at Ruhr-Universität Bochum in Germany have discovered a way to hide inaudible commands in audio files: commands that, while imperceptible to our ears, can take control of voice assistants. According to the researchers behind the technique, the flaw lies in the very way the AI is designed.
It’s part of a growing area of research known as “adversarial attacks,” which are designed to confuse deep neural networks (usually visually, as Co.Design has covered in the past), leaving the technology and infrastructure that depend on AI potentially vulnerable to bad-faith actors.
In this case, the systems being “attacked” by the researchers at Ruhr-Universität Bochum are personal assistants like Alexa, Siri, and Cortana. According to Professor Thorsten Holz of the Horst Görtz Institute for IT Security, their method, called “psychoacoustic hiding,” shows how hackers could manipulate virtually any kind of audio (songs, speech, even bird chirping) to embed words that only the machine can hear, allowing them to issue commands without nearby people noticing. To our ears the attack sounds just like a bird’s call, but a voice assistant would “hear” something very different.
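To make the idea concrete, here is a minimal, hypothetical sketch (not the Bochum team’s actual psychoacoustic-hiding code, which optimizes perturbations against a speech recognizer using proper psychoacoustic masking models). It simply mixes a low-amplitude “payload” signal into a louder carrier waveform so that the added energy stays far below the carrier’s level; the sample rate, signal shapes, and the 40 dB margin are illustrative assumptions.

```python
# Toy illustration of hiding a quiet payload inside a louder carrier signal.
# This is NOT the researchers' method; it only shows the general principle
# that a perturbation can be tiny relative to the audible signal yet still
# change the raw samples a machine-listening system consumes.
import numpy as np

SAMPLE_RATE = 16_000          # samples per second (common for speech models)
DURATION_S = 2.0              # clip length in seconds

t = np.linspace(0.0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)

# Carrier: a stand-in for benign audio (e.g. birdsong), here a decaying tone.
carrier = 0.8 * np.sin(2 * np.pi * 3000 * t) * np.exp(-2 * t)

# Payload: a stand-in for the hidden content, here a simple 440 Hz tone.
payload = np.sin(2 * np.pi * 440 * t)

# Scale the payload ~40 dB below the carrier's peak, a crude proxy for
# keeping it under the threshold at which a listener would notice it.
masking_margin_db = 40.0
scale = np.max(np.abs(carrier)) * 10 ** (-masking_margin_db / 20.0)
adversarial = carrier + scale * payload

# The perturbation is tiny relative to the carrier...
print(f"peak perturbation / peak carrier: {scale / np.max(np.abs(carrier)):.4f}")
# ...yet the samples a recognizer would process are no longer identical.
print(f"max per-sample change: {np.max(np.abs(adversarial - carrier)):.6f}")
```

In the real attack, the perturbation is not a fixed tone but is computed against the target recognizer’s processing pipeline, which is what lets the hidden content decode as specific words for the machine while remaining masked for humans.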