Amazon is making Alexa speak in the voice of a deceased loved one, and it worries security experts.
Smart speakers and their voice-activated digital assistants can do an impressive range of things; the only limit seems to be the imagination of developers. While most new features are genuinely useful, some are more questionable, or at least strange. That is the case with Amazon’s latest idea for its Alexa voice assistant: letting it imitate voices.
Amazon makes Alexa speak in the voice of a deceased loved one
During the Amazon Re:Mars conference, Alexa Vice President Rohit Prasad showed off a brand new capability of the American giant’s voice assistant: the ability to imitate voices. So far, there is no indication of when this feature will be available, or even whether it will ever reach the public.
Ironically, Amazon introduced this new feature as a way to honor our departed loved ones. The US firm showed a video in which Alexa reads a story to a child in the voice of his recently deceased grandmother. Rohit Prasad explained that the company is looking for ways to make its artificial intelligence as personalized as possible. “While AI can’t get rid of the pain of loss, it can extend memory.” An Amazon spokesperson told Engadget that this new skill can create a synthetic voiceprint after being trained on just a minute of audio of the person it is supposed to reproduce.
And it worries security experts
Security experts have long expressed concern that such audio tools, which use text-to-speech technology to create synthetic voices, could pave the way for new types of fraud. Voice cloning software has already done real harm, including a 2020 incident in the United Arab Emirates in which scammers tricked a bank manager into transferring $35 million by using a cloned voice to impersonate a company executive over the phone. That said, audio deepfake fraud is still not widespread, and the tools available remain relatively rudimentary.