OpenAI's Sora Tool Leaked by Group of Aggrieved Early Testers
OpenAI's Sora text-to-video tool was leaked by early testers, raising concerns about the exploitation of creative labor, copyright issues, and the need for ethical frameworks in AI development.
OpenAI's Sora, a text-to-video generation tool, has been leaked by a group of dissatisfied early testers known as "PR-Puppets." The leak raises significant concerns about the ethics of AI development, particularly the treatment of creative professionals. The Sora model, which generates high-fidelity video from text prompts, has been praised for its technical capabilities but criticized for allegedly exploiting the labor of artists and filmmakers who contributed to its development without adequate compensation or recognition. The leak was accompanied by an open letter decrying the commodification of creative expertise in AI. Following the leak, OpenAI temporarily disabled artists' access to Sora, further fueling debate over copyright, intellectual property, and the ethical responsibilities of technology companies. Critics argue that the incident underscores the need for transparency and fair compensation in collaborations between AI developers and creative communities, and for ethical frameworks that balance technological advancement with respect for human labor in a rapidly evolving AI landscape.
- OpenAI's Sora tool, capable of generating videos from text, was leaked by early testers.
- The leak highlights concerns over the exploitation of creative labor in AI development.
- Critics demand transparency and fair compensation for contributions from artists and filmmakers.
- The incident raises questions about copyright and intellectual property rights in AI.
- The fallout emphasizes the need for ethical frameworks in the AI industry.
Related
OpenAI was hacked; year-old breach wasn't reported to the public
Hackers breached OpenAI's internal messaging systems, exposing AI technology details, raising national security concerns. OpenAI enhanced security measures, dismissed a manager, and established a Safety and Security Committee to address the breach.
The New York Times Is Suing OpenAI – and Experimenting with It for Writing
The New York Times is suing OpenAI for scraping articles. Despite this, the Times experimented with OpenAI's AI for headline writing. The leaked code revealed AI projects for editorial tasks.
US lawmakers send a letter to OpenAI requesting government access
US lawmakers have urged OpenAI to enhance safety standards, allocate resources for AI safety research, and allow pre-deployment testing, following whistleblower allegations and concerns about AI risks and accountability.
OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode
OpenAI's voice interface for ChatGPT may lead to emotional attachments, impacting real-life relationships. A safety analysis highlights risks like misinformation and societal bias, prompting calls for more transparency.
A co-lead on Sora, OpenAI's video generator, has left for Google
Tim Brooks has left OpenAI to join Google DeepMind, focusing on video generation technologies. His departure follows a trend of high-profile resignations from OpenAI amid challenges faced by the Sora project.
If the title “API Key to OpenAI’s Sora Tool Leaked…” is too nerdy for Forbes’ audience, something like “Access to OpenAI’s Sora Tool Leaked…” would’ve at least properly communicated what was going on.
But at the end of the day I think the problem is just that this author has no idea how AI models work and doesn’t understand the difference between an AI model and a website where you can make stuff with an AI model.
This sounds weird. Surely one does not offer their contributions to something like OpenAI for free. Did they not sign an agreement? Were they unhappy with the terms? It sounds like they should have negotiated the terms upfront.
As is typical of the Forbes contributor network [0], the story is basically wrong. They just leaked a (now deactivated) API key and some videos created using Sora, along with a mission statement and a Python script that called the API. It was not an actual model leak.
[0]: https://www.niemanlab.org/2022/02/an-incomplete-history-of-f...
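To make that distinction concrete, a leak of this kind amounts to a thin client script that sends prompts to a hosted endpoint using a borrowed credential. The endpoint URL, request fields, and response handling below are purely hypothetical (Sora's API was never publicly documented); the point is only that revoking the key neutralizes such a leak, whereas leaked model weights could never be recalled.

```python
# Minimal sketch of "leaking access" rather than leaking a model:
# a script calling a hosted API with someone else's key. The endpoint
# and parameters are hypothetical and purely illustrative.
import os

import requests

API_KEY = os.environ.get("LEAKED_SORA_KEY", "sk-placeholder")  # a revocable credential
ENDPOINT = "https://api.example.com/v1/video/generations"  # hypothetical URL


def generate_video(prompt: str) -> dict:
    """Send a text prompt to the hosted service and return its JSON reply."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "duration_seconds": 10},  # hypothetical fields
        timeout=120,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Once the key is deactivated server-side, this script is useless;
    # a leaked model checkpoint, by contrast, could not be "turned off".
    print(generate_video("a paper boat drifting down a rain-soaked street"))
```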
==================================================
┌∩┐(◣◢)┌∩┐ DEAR CORPORATE AI OVERLORDS ┌∩┐(◣◢)┌∩┐ If this letter resonates with you add your signature here.
We received access to Sora with the promise to be early testers, red teamers and creative partners. However, we believe instead we are being lured into "art washing" to tell the world that Sora is a useful tool for artists.
ARTISTS ARE NOT YOUR UNPAID R&D we are not your: free bug testers, PR puppets, training data, validation tokens
Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the program for a $150B valued company. While hundreds contribute for free, a select few will be chosen through a competition to have their Sora-created films screened — offering minimal compensation which pales in comparison to the substantial PR and marketing value OpenAI receives.
║║║║║ DENORMALIZE BILLION DOLLAR BRANDS EXPLOITING ARTISTS FOR UNPAID R&D AND PR ║║║║║
Furthermore, every output needs to be approved by the OpenAI team before sharing. This early access program appears to be less about creative expression and critique, and more about PR and advertisement.
[̲̅$̲̅(̲̅ )̲̅$̲̅] CORPORATE ARTWASHING DETECTED [̲̅$̲̅(̲̅ )̲̅$̲̅]
We are releasing this tool to give everyone an opportunity to experiment with what ~300 artists were offered: free and unlimited access to this tool.
We are not against the use of AI technology as a tool for the arts (if we were, we probably wouldn't have been invited to this program). What we don't agree with is how this artist program has been rolled out and how the tool is shaping up ahead of a possible public release. We are sharing this with the world in the hopes that OpenAI becomes more open, more artist-friendly and supports the arts beyond PR stunts.
We call on artists to make use of tools beyond the proprietary: open-source video generation tools allow artists to experiment with the avant-garde free from gatekeeping, commercial interests or serving as PR for any corporation. We also invite artists to train their own models with their own datasets.
Some open source video tools available are:
- CogVideoX
- Mochi 1
- LTX Video
- Pyramid Flow

However, as we are aware not everyone has the hardware or technical capability to run open source tools and models, we welcome tool makers to listen to and provide a path to true artist expression, with fair compensation to the artists.
Enjoy,
some sora-alpha-artists, Jake Elwes, Memo Akten, CROSSLUCID, Maribeth Rauh, Joel Simon, Jake Hartnell, Bea Ramos, Power Dada, aurèce vettier, acfp, Iannis Bardakos, 204 no-content | Cintia Aguiar Pinto & Dimitri De Jonghe, Emmanuelle Collet, XU Cheng, Operator, Katie Peyton Hofstadter, Anika Meier, Solimán López
If this letter resonates with you add your signature here.
==================================================