The models who found their likenesses had been used in AI propaganda
Models discovered their images had been used in AI-generated propaganda videos without their consent, prompting backlash against Synthesia. The incident has raised ethical concerns and fueled proposed legislation to protect individuals from unauthorized use of their likenesses.
Models have discovered that their likenesses were used in AI-generated propaganda videos without their consent, including videos supporting authoritarian regimes such as that of Burkina Faso's President Ibrahim Traoré. The London-based company Synthesia, which specializes in creating lifelike AI videos, has faced backlash after its technology was misused to produce deepfake content. Models such as Mark Torres and Connor Yeates expressed feelings of violation and anxiety upon learning their images had been used to promote military rule and misinformation. Synthesia, which has achieved "unicorn" status with significant investment, says it has banned the accounts responsible for the misuse and improved its moderation processes. However, the models were unaware of the videos until contacted by the media, raising concerns about the lack of safeguards and the potential for reputational damage. The situation has prompted discussions about the ethical implications of AI in creative industries, contributing to strikes by actors in the U.S. and to proposed legislation aimed at protecting individuals from unauthorized use of their likenesses. Despite Synthesia's assurances, the models feel betrayed and worry about the long-term consequences of their images being associated with propaganda.
- Models' likenesses were used in AI-generated propaganda without consent.
- Synthesia's technology has been misused, leading to significant backlash.
- Affected individuals reported feelings of violation and anxiety.
- The incident has sparked discussions on ethical AI use in creative industries.
- Proposed legislation aims to protect individuals from unauthorized likeness use.
Related
Apple trained AI models on YouTube content without consent
Tech giants such as Apple used YouTube video subtitles for AI training without creators' consent. Legal and ethical concerns arise as companies leverage third-party datasets, affecting creators and the norms around AI training data.
YouTube creators surprised to find Apple and others trained AI on their videos
YouTube creators express surprise as tech giants Apple, Salesforce, and Anthropic train AI models on YouTube videos without consent. EleutherAI's dataset "the Pile" includes content from popular creators and media brands, raising ethical concerns.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn that generative AI contributes to the spread of fake content, complicating the distinction between truth and deception, and potentially undermining public understanding and accountability in digital information.
Mapping the Misuse of Generative AI
New research from Google DeepMind and partners analyzes the misuse of generative AI, identifying tactics such as exploitation of AI capabilities and compromise of AI systems. It suggests public-awareness and safety initiatives to combat these issues.
Anyone Can Turn You into an AI Chatbot
The creation of unauthorized AI chatbots using real individuals' likenesses raises ethical concerns, as seen in a case involving a bot mimicking a murdered woman, highlighting inadequate platform policies and legal protections.