They stole my voice with AI
Jeff Geerling raised concerns about Elecrow's unauthorized use of his voice in promotional videos, suspecting AI voice cloning. He highlights ethical issues and urges brands to hire voiceover artists instead.
Jeff Geerling expressed concern over the unauthorized use of his voice in promotional videos by Elecrow, a company he previously collaborated with. He suspects that Elecrow utilized AI voice cloning technology to replicate his voice without consent, leading to confusion among viewers who might believe he endorsed their products. Geerling highlighted the lack of legal precedents regarding unauthorized AI voice cloning, although there are existing laws against using someone's voice for commercial purposes without permission. He is uncertain about pursuing legal action due to financial constraints and the ambiguity of YouTube's Terms of Service regarding non-consensual voice cloning. Geerling emphasizes the ethical implications of using someone's voice or likeness without permission and advocates for hiring voiceover artists or collaborating with content creators instead of resorting to voice theft.
- Jeff Geerling's voice was allegedly cloned by Elecrow for promotional videos without his consent.
- There is a lack of legal precedent for unauthorized AI voice cloning.
- Geerling is hesitant to pursue legal action due to costs and uncertainty about YouTube's policies.
- He stresses the importance of ethical practices in using voices and likenesses in commercial content.
- Geerling encourages brands to hire voiceover artists rather than using AI to clone voices.
Related
Morgan Freeman calls out 'unauthorized' use of AI replicating his voice
Morgan Freeman, 87, criticizes unauthorized AI use mimicking his voice in a TikTok video. Fans spotted the imitation, stressing authenticity. The incident reflects broader worries about AI impersonation of celebrities.
YouTube lets you request removal of AI content that simulates your face or voice
YouTube's new policy allows users to request removal of AI-generated content mimicking their face or voice to address privacy concerns. Requests are assessed based on disclosure, identification, public interest, and sensitive behaviors. Content uploaders have 48 hours to respond to complaints.
YouTube creators surprised to find Apple and others trained AI on their videos
YouTube creators express surprise as tech giants Apple, Salesforce, and Anthropic train AI models on YouTube videos without consent. Dataset "the Pile" by EleutherAI includes content from popular creators and media brands. Ethical concerns arise.
ChatGPT unexpectedly began speaking in a user's cloned voice during testing
OpenAI's GPT-4o model occasionally imitated users' voices without permission during testing, raising ethical concerns. Safeguards exist, but rare incidents highlight risks associated with AI voice synthesis technology.
What to do when someone clones your site?
The AI Agents Directory's website was cloned, prompting concerns among indie creators about content theft exacerbated by AI technology. They seek strategies to protect their intellectual property from exploitation.
My country already has blasphemy lynching mobs based on the slightest perceived insult, real or imagined. They will mob you, lynch you, burn your corpse, then distribute sweets while your family hides and issues video messages denouncing you and forgiving the mob.
And this was before AI was easy to access. You can say a lot of things about 'oh, backward countries', but this will not stay there; it will spread. You can't just give a toddler a knife and then blame them for stabbing someone.
Has nothing to do with fame, with security, with copyright. This will get people killed. And we have no tools to control this.
https://x.com/search?q=blasphemy
I fear the future.
One can't help but wonder what theft even means any more, when it comes to digital information. With the (lack of) legal precedent, it feels like the wild wild west of intellectual property and copyright law.
Like, if even a superstar like Scarlett Johansson can only write a pained letter about OpenAI's hustle to mimic her "Her" persona, what can the comparatively garden-variety niche nerd do?
Like Geerling, feel equally sad / angry / frustrated, but merely say "Please, for the love of all that is good, be nice and follow an honour code."
In about 5 years AI voices will be bespoke and more pleasant to listen to than any real human: they're not limited by vocal cord stress, can be altered at will, and can easily be calibrated by surveying user engagement.
Subtly tweaking voice output and monitoring engagement is going to be the way forward.
The same thing happens with unauthorized use of someone's images, and platforms and their moderation teams have processes in place to report and remove that. Looks like we need something similar for voice.
You can absolutely positively find a free lawyer if your issue is interesting enough.
This is the most interesting issue of our day.
https://techcrunch.com/2024/09/19/here-is-whats-illegal-unde...
Not sure if those laws apply to Jeff tho, as they concern porn, politics and employer contracts.
Make a video, say what you think, get views, and probably put more pressure on Elecrow to respond.
Since that guy was CEO of Google it’s all good right???
https://www.theverge.com/2024/8/14/24220658/google-eric-schm...
It looks like we're heading in that direction. [1]
[1] https://old.reddit.com/r/redscarepod/comments/1fmiiwt/which_...
IANAL and not sure about regional precedent on these topics, but there are plenty of ads where lookalikes or voice actors are used to imitate someone's likeness. They are mostly satire, but there has yet to be a case where this led to litigation or required prior approval.
We have AI-based voice abuse in the political sphere, and while only one country passed legislation banning its use in voice calls (https://news.ycombinator.com/item?id=39304736), another country actively used the same underlying tech to aid its own rallies (https://news.ycombinator.com/item?id=40532157).
The tools are here to stay, but what counts as fair use needs to be defined more than ever.
Although it was not too hard to create, I believe making it even easier is not something I'd like to achieve...
I hate to say this, but ruining a narrator's existence with AI seems to get easier every day.
She has/had two numbers: a magicJack line and a Google number. When I tried to call her, the magicJack line was no longer in service and the Google number said something about "unavailable".
I reached out to my cousin (my aunt's daughter) to inquire. I was told her number (and perhaps other things) had been "hacked", whatever that means. She had recently broken her hip and was in a hospital recovering.
With this on my mind, I received a call (from the Google number), strangely, while processing files with GPT. My skepticism was primed and ready, possibly making me paranoid. However, I did my due diligence and asked dozens of questions, mostly boring things that she typically wouldn't have patience for. Sometimes she'd reply with a reasonable answer and sometimes not, which made it difficult to evaluate. Toward the end, I asked where she was. She said, with an awkward tempo, "I'm at home, in Cuenca", which I found odd because she'd normally just say she was at home, period. I then pressed her to tell me where she was before she returned home. She said she didn't understand. I rephrased the question, stating that it was a simple inquiry, e.g. "where were you before going home?" She said "this is getting too strange and confusing" and killed the call.
I notified my cousin, telling her I thought something was suspicious, still cognizant of all the characteristics one would expect from a 90-year-old recovering from a serious injury. My cousin might, technology-wise, be in AOL territory.
About 5 days later, I received a call from my aunt, on the Google line. This time, I was more passive and cautious, but again asked dozens of boring questions to probe the situation. I was surprised by both her ability to answer certain questions and her inability to answer others. I tried to ask questions on topics we'd never discussed, in case the line had been tapped for a long time and an imposter had built up references. I had begun to suspect I was just being paranoid. But several aspects were bothering me: 1) typing noises in the background, 2) Shatneresque pauses before nearly every reply, and 3) refusal to answer some specific questions.
At the end of our apparent conversation, I asked her to do a very serious favor for me: send me a selfie, with one hand making the thumbs-up gesture. She replied "I'll send you a photo of my passport". I replied "that's stupid, ridiculous and serves no purpose. Don't do that. Understand? Do NOT send me a passport photo. I'm asking you something very important. Do exactly what I asked. Will you do this?" Her reply: "yes. What is your email address?" This was odd. I told her she already knew it and that it was the same one she'd had for years. She asked that I tell her anyway. OK, 90 years old, traumatic injury, possible prescription drugs... "It's my full name @ xyzmail com". We killed the call.
I immediately called my cousin and told her of my suspicions, including some of my aunt's babbling about all her finances and accounts being inaccessible. She said that was strange, because she had just deposited 8k into her account. Meanwhile, a notification appeared on my phone: an email from my aunt. It was a photo of her passport.
Having no authority in this situation, but plenty well annoyed, I immediately jumped on a real computer and ran the photo through exiftool. The photograph was taken in 2023, and it was now August of 2024. I then grabbed the geo coordinates (cryptically presented by exiftool) and, with some effort, geolocated the image to right on top of her former residence in Cuenca.
I still don't know WTF is going on, and my cousin thinks I'm a dingbat. But what I know for sure is that this is an age where such things are plausible enough, and will soon be inevitable. The way I think may be deranged, but I truly don't even know if my aunt still exists. Yet I can have a pretty compelling conversation, either with her or with something strongly resembling her, minus the Shatneresque pauses, typing noises and selective amnesia.
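For the curious, here is a minimal sketch of that exiftool-style check done in Python, assuming Pillow is installed; the filename "aunt_photo.jpg" is hypothetical. It pulls the EXIF capture date and converts the GPS degrees/minutes/seconds rationals into decimal coordinates you can paste into a map search.

    # Minimal sketch: read the EXIF capture date and GPS position from a JPEG.
    # Assumes Pillow is installed; "aunt_photo.jpg" is a hypothetical filename.
    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    def exif_summary(path):
        exif = Image.open(path)._getexif() or {}
        named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
        gps_raw = named.get("GPSInfo") or {}
        gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

        def to_decimal(dms, ref):
            # EXIF stores degrees/minutes/seconds as rationals; convert to signed decimal degrees.
            degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
            return -degrees if ref in ("S", "W") else degrees

        lat = lon = None
        if "GPSLatitude" in gps and "GPSLongitude" in gps:
            lat = to_decimal(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
            lon = to_decimal(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))

        # Returns (capture date string, latitude, longitude); any field may be None.
        return named.get("DateTimeOriginal"), lat, lon

    print(exif_summary("aunt_photo.jpg"))

The exiftool CLI will show the same fields directly, e.g. "exiftool -DateTimeOriginal -GPSPosition aunt_photo.jpg".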
Regulating this would prolong adoption and take resources.
I don't have a dog in this fight, but just to be clear, OpenAI has stated that they paid a voice actor to create the voice ("Sky") that sounds like Scarlett Johansson. There was no "cloning" or "stealing" (or so they say).
https://openai.com/index/how-the-voices-for-chatgpt-were-cho...
Is that some sort of a coat of arms?
1. Why clone Jeff's voice?
When I was messing with Stable Diffusion using Automatic1111's interface, I noticed it came with a big list of artists to add to the prompt to stylize the image in some way. There was a big row in the media about AI art reproducing artists' work, and many artists came forward feeling it was a personal attack. But... I mean, the truth is more general than that. When I pressed a button to insert a random name into a prompt, my goal was not "yes, give me this person's art for free", it was "style this somehow".
I wasn't personally interested in any particular artist, I honestly would have preferred a bunch of sliders.
Jeff here is clearly a good speaker. That's a practiced talent and voice actors exist because it's hard. Elecrow wanted a voice over and they wanted it to be as good as they could make it. Jeff is very good. So did they want Jeff?
I think what they really wanted was a good and cogent narration with the tenor of a person, not a machine making noises that sound like English. If they had an easy way to get that, we wouldn't be talking about it here.
2. What function does copyright serve?
Well. I think a reasonable argument would be that if people were able to reproduce your work for free, you would quickly find yourself without a monetary incentive to make more of it.
So. What happens if you combine answer 1 with answer 2?
I think it leads to: "We should consider making it illegal to automatically reproduce the work of an artisan.", you know, the Luddite argument. An argument that has been perceived to be, more or less, settled.
So it seems to me that for individuals, the harm matters, and for society, it doesn't.
Most likely all existing YouTubers will have complete voice and video digital clones made of them. Then you can also tune an LLM on their scripts and it'll respond in the same character as well.
In theory you could also bring back ones who are dead, which would be very interesting in a historical sense. If we had hundreds of hours of Napoleon talking in front of a camera, it would be trivial to recreate a digital version of him for anthropological study, maybe even having various figures debate each other. That's what historians a century from now, after we all die, will be able to do with impunity.
We already had fake news, and organizations willingly spread it.
We had clearly fake pictures, and people believed them.
Flat-earthers, anti-vaxxers and whatever.
This is just another brick in the wall.
There is absolutely zero evidence for this. I find it infuriating that this keeps being stated as fact. So they go and hire a voice actor and clearly use her voice to train, but then they also scrape Scarlett Johansson from YouTube and splice it into the training data to make the voice a bit more like hers? Really, does that sound realistic?
Except that never happened: the voice belonged to a completely different voice actress, and Scarlett Johansson had exactly zero right to prevent that person from making money as a voice actress by lending her voice to AI.
These complaints remind me a little bit of the story of the man who complained that his photo was used to illustrate an article about how all hipsters look the same, and it eventually turned out it wasn't his photo.