Amazon Product Reveal Hailed as 'Monstrosity'; Creepy Innovation Can Speak to You in Voice of Dead Relatives

Amazon’s Alexa now has the ability to imitate multiple voices, which the company said can be used to have it speak in the voices of dead relatives.

The feature was showcased at Amazon’s annual MARS conference last month in Las Vegas, according to The Verge.

It’s unclear if the feature will ever be made public.

A video at the MARS conference showed a child asking Alexa to read a story in the voice of his dead grandmother.

“As you saw in this experience, instead of Alexa’s voice reading the book, it’s the kid’s grandma’s voice,” said Rohit Prasad, Amazon’s head scientist for Alexa AI.

Prasad talked up the concept, saying that giving “human attributes” to AI was important “in these times of the ongoing pandemic, when so many of us have lost someone we love.”

“While AI can’t eliminate that pain of loss, it can definitely make their memories last,” Prasad said.

Some on social media were not sold, with users describing the feature as "creepy," "morbid" and a "monstrosity."

Microsoft, which offers similar voice-mimicking technology, has set limits on how voices can be created and used, according to NBC.

“This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners,” Natasha Crampton, who heads Microsoft’s AI ethics division, wrote in a blog post.

Amazon said one minute of recorded audio is enough to do the trick. That's a massive change from the company's current approach, which has required celebrities to spend hours recording audio, according to Variety.

Prasad said engineers were able to cut down the time “by framing the problem as a voice-conversion task, and not a speech-generation task.”

This article appeared originally on The Western Journal.
