
A Dolphin attack is the name given to a method of accessing a smartphone without the user’s consent by issuing voice commands at ultrasonic frequencies.

Our Experts Explain

Digital assistants, like Google’s Assistant, Apple’s Siri and Amazon’s Alexa, are becoming more and more popular as the world embraces this new Artificial Intelligence technology, built to make our lives easier. The big companies are competing to offer the most effective product, and one way to stand out is to make digital assistants more personalised and tailored to individual users. To achieve this, however, a great deal of personal information needs to be shared with the device, and that information can be a goldmine for cyber-criminals.

Figure 1: Image sourced from www.bbc.com

Researchers claim to have found a way to hijack voice-controlled assistants using ultrasonic audio commands that dolphins can hear but humans cannot. On most smartphones, the digital assistant is set up to listen for a “wake word”. For Google, the assistant takes orders once a person says, “OK Google”, while Apple’s assistant responds to “Hey Siri” and Amazon’s to “Alexa”.

Chinese researchers from Zhejiang University created a program that could take a standard human voice command and broadcast it at ultrasonic frequencies (above 20 kHz). These frequencies are too high for humans to hear, but the team wanted to see whether smartphone microphones would still pick them up. Playing them required only basic equipment: a smartphone, an amplifier, an ultrasonic transducer and a battery, with the added parts costing a mere $3.
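
The way the researchers “translated” a voice command into ultrasound is amplitude modulation: the recorded command rides on an ultrasonic carrier, and the non-linear response of the phone’s microphone circuitry demodulates it back into the audible band inside the device. Below is a minimal sketch of that modulation step in Python; the file names, the 96 kHz output rate and the 25 kHz carrier are illustrative assumptions, not details from the article.

```python
import numpy as np
from scipy.io import wavfile

# Load a recorded voice command; the file name is a placeholder.
rate, voice = wavfile.read("ok_google.wav")
if voice.ndim > 1:
    voice = voice[:, 0]          # keep one channel if the file is stereo
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))   # normalise to [-1, 1]

out_rate = 96_000    # must exceed twice the carrier frequency (Nyquist)
carrier_hz = 25_000  # illustrative carrier above human hearing (~20 kHz)

# Resample the command to the output rate (simple linear interpolation).
t_in = np.arange(len(voice)) / rate
t_out = np.arange(int(len(voice) * out_rate / rate)) / out_rate
baseband = np.interp(t_out, t_in, voice)

# Standard AM: the voice rides on the amplitude of an inaudible carrier.
# A non-linear microphone front end effectively squares this signal,
# recreating the original audible command inside the device.
modulated = (1.0 + 0.8 * baseband) * np.cos(2 * np.pi * carrier_hz * t_out)
modulated /= np.max(np.abs(modulated))

wavfile.write("ultrasonic_command.wav", out_rate,
              (modulated * 32767).astype(np.int16))
```

Nothing in the audible band is transmitted, so a bystander hears silence, yet the command that emerges inside the handset sounds like ordinary speech to the assistant.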

In total, the researchers identified 16 devices that could be hacked using the Dolphin attack, including the iPhone 4 to iPhone 7 Plus, iPad mini 4, MacBook, Apple Watch, Nexus 7, Samsung Galaxy S6, Huawei Honor 6, Amazon Echo and the Audi Q3.

During tests, from a few feet away, they were able to make calls, launch FaceTime, take photographs, visit websites, activate a phone’s airplane mode and even mess with the voice-controlled navigation system in an Audi car. Researchers suggested that an attacker could embed hidden ultrasonic commands in online videos, or broadcast them in public while near a victim.

How likely is an attack?

While it sounds scary and potentially destructive, the likelihood of a successful attack is diminished by the following factors:

  • The broadcasting device has to be within five to six feet of the victim and needs a reasonably quiet location.
  • The smartphone needs to be unlocked before allowing any sensitive activity such as viewing websites.
  • The attack would not work on a device that had been trained to respond to only one person’s voice (which is a feature Google’s Assistant offers).
  • Both Apple and Google allow “wake words” to be turned off, so the assistants can’t be activated without permission.

Both Google and Amazon released statements saying that they take user privacy and security very seriously, and are reviewing the claims made.

Researchers advise that a solution for this hack is for device makers to programme their Artificial Intelligence assistants to ignore frequencies above 20 kHz or cancel out any frequencies that humans cannot hear.

Dr Steven Murdoch, a cyber-security researcher at University College London, says he “expects the smart speaker vendors will be able to do something about it and ignore the higher frequencies.” The Chinese research team also suggested that smart speakers could use microphones designed to filter out sounds above 20 kHz to prevent the attack.
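
In software terms, that defence is a low-pass filter placed in front of the wake-word detector. The sketch below shows the idea in Python with SciPy; the 96 kHz capture rate, the 20 kHz cutoff and the Butterworth design are illustrative assumptions, not how any vendor actually implements it.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def strip_ultrasound(audio: np.ndarray, sample_rate: int,
                     cutoff_hz: float = 20_000.0) -> np.ndarray:
    """Attenuate everything above the audible band before the
    samples ever reach the wake-word detector."""
    nyquist = sample_rate / 2
    if cutoff_hz >= nyquist:
        return audio  # nothing above the cutoff can exist at this rate
    # 8th-order Butterworth low-pass, run forwards and backwards
    # (filtfilt) so the filtering adds no phase distortion.
    sos = butter(8, cutoff_hz / nyquist, btype="low", output="sos")
    return sosfiltfilt(sos, audio)

# Example: a 96 kHz capture could carry a 25 kHz attack carrier;
# after filtering, very little of that energy survives.
rate = 96_000
t = np.arange(rate) / rate
capture = np.cos(2 * np.pi * 25_000 * t)  # stand-in for an ultrasonic command
clean = strip_ultrasound(capture, rate)
print(f"RMS before: {np.sqrt(np.mean(capture**2)):.3f}, "
      f"after: {np.sqrt(np.mean(clean**2)):.4f}")
```

The hardware route the researchers mention, a microphone that simply cannot pick up frequencies above 20 kHz, achieves the same effect one step earlier in the chain.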

Until that happens, if you are feeling paranoid, keep your voice assistants deactivated.