Hackers steal your voice


Security experts are looking into ways attackers can fool voice-based security systems by impersonating a person’s voice.

A team at the University of Alabama at Birmingham (UAB) has found that, using readily available voice-morphing software, hackers can mount voice-imitation attacks that breach both automated and human authentication systems.

The research was presented last week at the European Symposium on Research in Computer Security (ESORICS) in Vienna, Austria.

Nitesh Saxena said that because people rely on their voices all the time, it has become a comfortable practice to base security systems around them.

“What they may not realize is that level of comfort lends itself to making the voice a vulnerable commodity. People often leave traces of their voices in many different scenarios. They may talk out loud while socializing in restaurants, giving public presentations or making phone calls, or leave voice samples online,” he added.

“Voice is a characteristic unique to each person; it forms the basis of that person’s authentication, which is why a stolen voice gives the attacker the keys to that person’s privacy.”

Hackers can easily record a voice clip if they are in close proximity to their target, capture one over the phone via a spam call, or use audio snippets found online.

Advanced voice morphing software can also create an extremely close imitation of a person’s voice from a limited number of audio samples, allowing an attacker to speak any message in the victim’s voice.

A few minutes’ worth of audio in a victim’s voice is enough to produce a convincing clone.
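The details of commercial morphing tools vary, but the core idea behind simple voice conversion can be sketched in a few lines: estimate the statistics of the victim’s acoustic features (pitch, spectral coefficients) from the collected samples, then re-map the attacker’s speech frames toward them. The Python sketch below is a toy illustration of that idea, with made-up feature arrays and helper names; it is not the software the UAB team studied, and real tools use far richer models.

```python
import numpy as np

def speaker_stats(features: np.ndarray):
    """Per-dimension mean and std of a speaker's acoustic features.

    `features` is a (frames x dims) array, e.g. pitch plus spectral
    coefficients extracted from a few minutes of recorded speech.
    """
    return features.mean(axis=0), features.std(axis=0) + 1e-8

def convert(frames: np.ndarray, source_stats, target_stats) -> np.ndarray:
    """Crude conversion: shift the attacker's frames so their feature
    statistics match those measured from the victim's samples."""
    src_mean, src_std = source_stats
    tgt_mean, tgt_std = target_stats
    return (frames - src_mean) / src_std * tgt_std + tgt_mean

# Stand-in data: random arrays in place of real extracted features.
rng = np.random.default_rng(0)
attacker_speech = rng.normal(0.0, 1.0, size=(500, 13))
victim_samples = rng.normal(0.6, 1.4, size=(500, 13))

converted = convert(attacker_speech,
                    speaker_stats(attacker_speech),
                    speaker_stats(victim_samples))
# `converted` now has roughly the victim's feature statistics.
```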

The researchers tested voice biometrics, or speaker verification, as used to secure systems such as online banking, smartphone PIN locks and government access control. They also looked at the impact of stealing voices to imitate humans in conversation, such as morphing celebrity voices and posting snippets online, leaving fake voice messages, and creating false audio evidence in court.

The researchers’ attacks defeated the majority of advanced voice-verification algorithms, which rejected only 10-20 per cent of the morphed samples. Human listeners asked to verify voice samples rejected only about half of the morphed clips.
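To see why those numbers matter, it helps to recall how a typical speaker-verification check is structured: the system compares a compact “voiceprint” (embedding) of the submitted audio against the one enrolled for the account and accepts anything scoring above a threshold. The sketch below is a minimal, hypothetical illustration of that accept/reject step, not the systems the researchers attacked; a morphed sample whose embedding lands close enough to the enrolled template slips through, which is exactly the low rejection rate reported above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two fixed-length voice embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, sample: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the speaker if the sample is close enough to the enrolled
    template. The threshold value here is made up for illustration."""
    return cosine_similarity(enrolled, sample) >= threshold

# Hypothetical embeddings: the victim's enrolled template, a genuine new
# sample, and a morphed sample engineered to sit near the template.
rng = np.random.default_rng(1)
victim_template = rng.normal(size=256)
genuine_sample = victim_template + rng.normal(scale=0.3, size=256)
morphed_sample = victim_template + rng.normal(scale=0.5, size=256)

print(verify(victim_template, genuine_sample))  # True: legitimate user accepted
print(verify(victim_template, morphed_sample))  # True here too: a false acceptance
```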