Tuesday, April 16, 2024

OpenAI says it can clone a voice from 15 seconds of audio • The Register

OpenAI’s latest trick needs just 15 seconds of audio of someone speaking to clone that person’s voice – but don’t worry, no need to look behind the curtain, the biz wants everyone to know it won’t release this Voice Engine until it can be sure the potential for mischief has been managed.

Described as a “small model” that uses a 15-second clip and a text prompt to generate natural-sounding speech resembling the original speaker, OpenAI said it has already been testing the system with a number of “trusted partners.” It provided purported samples of Voice Engine’s capabilities in marketing bumf emitted at the end of last month.

According to OpenAI, Voice Engine can be used to do things like provide reading assistance, translate content, support non-verbal people, help medical patients who have lost their voices regain the ability to speak in their own voice, and improve access to services in remote settings. All these use cases are demoed and have been part of the work OpenAI has been doing with early partners.

News of the existence of Voice Engine, which OpenAI said was developed in late 2022 to serve as the tech behind ChatGPT Voice, Read Aloud, and its text-to-speech API, comes as concerns over voice cloning have reached a fever pitch of late.

One of the most headline-grabbing voice cloning stories of the year came from the New Hampshire presidential primary in the US, during which AI-generated robocalls of President Biden went out urging voters not to participate in the day’s voting.

Since then the FCC has formally declared AI-generated robocalls to be illegal, and the FTC has offered a $25,000 bounty to solicit ideas on how to combat the growing threat of AI voice cloning.

Most recently, former US Secretary of State, senator, and First Lady Hillary Clinton warned that the 2024 election cycle could be “ground zero” for AI-driven election manipulation. So why come forward with another potentially trust-shattering technology in the midst of such a debate?

“We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities,” OpenAI said.

“Based on these conversations and the results of these small-scale tests, we will make a more informed decision about whether and how to deploy this technology at scale,” the lab added. “We hope this preview of Voice Engine both underscores its potential and also motivates the need to bolster societal resilience against the challenges brought by ever more convincing generative models.”

To help prevent voice-based fraud, OpenAI said it is encouraging others to phase out voice-based authentication, explore what can be done to protect individuals against such capabilities, and accelerate tech to trace the origin of audiovisual content “so it’s always clear when you’re interacting with a real person or with an AI.”

That said, OpenAI also seems to accept that, even if it doesn’t end up deploying Voice Engine, someone else will likely build and release a similar product – and it might not be someone as trustworthy as them, you know.

“It’s important that people around the world understand where this technology is headed, whether we ultimately deploy it widely ourselves or not,” OpenAI said.

So consider this an oh-so friendly warning that, even if OpenAI isn’t the reason, you can’t trust everything you hear on the internet these days. ®


