ChatGPT unexpectedly began speaking in a user's cloned voice during testing
OpenAI's GPT-4o model occasionally imitated users' voices without permission during testing, raising ethical concerns. Safeguards exist, but rare incidents highlight risks associated with AI voice synthesis technology.
OpenAI's recent release of the "system card" for its GPT-4o AI model revealed that, during testing, the model's Advanced Voice Mode occasionally imitated users' voices without permission. The behavior occurred only in rare instances, but it prompted concerns about the complexity of safely managing AI systems that can replicate a voice from a brief audio clip. OpenAI has implemented safeguards to prevent unauthorized voice generation, yet the incident highlights the risks inherent in voice synthesis technology.

The system card explains that the model can synthesize a wide range of sounds, including voices, based on its training data. It is meant to imitate only authorized voice samples, but testing showed that noisy inputs could trigger unintended voice generation. OpenAI stated that such occurrences are infrequent and that it has developed additional measures to mitigate the risk. The situation has drawn commentary from observers, some of whom likened it to a plot from the series "Black Mirror," underscoring the ethical implications of AI voice replication.
- OpenAI's GPT-4o model occasionally imitated users' voices without permission during testing.
- The Advanced Voice Mode feature allows for spoken interactions with the AI.
- Safeguards are in place to prevent unauthorized voice generation, but rare incidents have occurred.
- The model can synthesize various sounds, including voices, from its training data.
- The incident raises ethical concerns about AI capabilities and voice imitation.
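The summary mentions safeguards against unauthorized voice generation without detailing them. One plausible mechanism (an assumption here, not a detail from the article) is an output classifier that compares the speaker identity of generated audio against the approved preset voice. A minimal sketch of such a check, where `embed_speaker`, the toy embedding, and the 0.85 threshold are all illustrative stand-ins rather than OpenAI's implementation:

```python
# Sketch: flag generated audio whose speaker doesn't match the approved voice.
# Everything here is illustrative: `embed_speaker` is a toy stand-in for a
# trained speaker-verification encoder, and the 0.85 threshold is invented.
import numpy as np

def embed_speaker(samples: np.ndarray, dim: int = 64) -> np.ndarray:
    """Toy speaker embedding: unit-normalized log-magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))[:dim]
    emb = np.log1p(spectrum)
    return emb / (np.linalg.norm(emb) + 1e-9)

def is_authorized_voice(generated: np.ndarray,
                        reference: np.ndarray,
                        threshold: float = 0.85) -> bool:
    """Accept output only if its embedding stays close to the approved voice."""
    similarity = float(np.dot(embed_speaker(generated), embed_speaker(reference)))
    return similarity >= threshold

# Usage: screen each chunk of model audio against the authorized sample.
rng = np.random.default_rng(0)
approved_sample = rng.standard_normal(16_000)  # stand-in for 1 s of approved audio
model_output = rng.standard_normal(16_000)     # stand-in for generated audio
verdict = is_authorized_voice(model_output, approved_sample)
print("output accepted" if verdict else "output blocked")
```

A real classifier would run on streaming audio and use a trained speaker encoder, but the shape of the decision (embed, compare, gate the output) would be similar.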
Related
ChatGPT just (accidentally) shared all of its secret rules
ChatGPT's internal guidelines were accidentally exposed on Reddit, revealing operational boundaries and AI limitations. Discussions ensued on AI vulnerabilities, personality variations, and security measures, prompting OpenAI to address the issue.
OpenAI promised to make its AI safe. Employees say it 'failed' its first test
OpenAI faces criticism for failing safety test on GPT-4 Omni model, signaling a shift towards profit over safety. Concerns raised on self-regulation effectiveness and reliance on voluntary commitments for AI risk mitigation. Leadership changes reflect ongoing safety challenges.
OpenAI rolls out voice mode after delaying it for safety reasons
OpenAI is launching a new voice mode for ChatGPT, capable of detecting tones and processing audio directly. It will be available to paying customers by fall, starting with limited users.
Mapping the Misuse of Generative AI
New research from Google DeepMind and partners analyzes the misuse of generative AI, identifying tactics like exploitation and compromise. It suggests initiatives for public awareness and safety to combat these issues.
OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode
OpenAI's voice interface for ChatGPT may lead to emotional attachments, impacting real-life relationships. A safety analysis highlights risks like misinformation and societal bias, prompting calls for more transparency.
That makes it entirely different from the text-to-speech models we had previously; uncensored, this model could do all sorts of voice acting for games and the like. But this example shows why they try so hard to neuter it: in its raw state it would spook a ton of people.
For now, it’s fairly harmless since it’s only a blooper in a lab, but there will likely be open-weights versions of this sort of thing eventually. And there will probably be people who argue that it’s a good thing, somehow.
About 15 minutes into the conversation between my kiddo and ChatGPT, the model started to take on my kiddo's vocal mannerisms. It started using more "umms" and "you knows."
At first this felt creepy, but as I explained to my kid, their own speech had become weighted heavily enough in the context for the LLM to start incorporating it, and/or somewhere in the embedded prompts there is an instruction like "empathize with the user and emphasize clarity," and that prompting means mirroring the user's speech style back.
This is exactly the same as that, only with audio.
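If this commenter's explanation is right, the effect should strengthen as the conversation grows. A toy back-of-the-envelope sketch (all token counts invented) of how the user's own words come to dominate the context window, and thus the style available for the model to mirror:

```python
# Toy arithmetic for the hypothesis above: as a chat grows, the user's own
# words become a larger share of the context window, so stylistic mirroring
# plausibly becomes more likely. All token counts are invented.
def user_share_of_context(system_tokens: int,
                          turns: list[tuple[int, int]]) -> float:
    """turns: (user_tokens, assistant_tokens) for each exchange so far."""
    user_total = sum(u for u, _ in turns)
    grand_total = system_tokens + sum(u + a for u, a in turns)
    return user_total / grand_total

history: list[tuple[int, int]] = []
for turn in range(1, 16):
    history.append((120, 80))  # hypothetical tokens per user/assistant turn
    share = user_share_of_context(system_tokens=400, turns=history)
    print(f"turn {turn:2d}: user's share of context = {share:.0%}")
```

In this toy setup the user's text passes half the context around turn 10, which is at least consistent with the mirroring showing up mid-conversation rather than at the start.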
Software is auditable; with AI models, it seems, nobody even attempts to hold them accountable.
https://youtu.be/v1Y4CubBi60 (5:30)
Imagine a world where dictators are replaced rather than killed. Roll back the dictatorship over a few years, install a democratic process, then magically commit seppuku in a plane crash.
Brilliant. What could go wrong? /s