COPIED Act would make removing AI digital watermarks illegal
The COPIED Act aims to protect creators by regulating AI-generated content through authentication standards and legal repercussions for unauthorized use, garnering support from industry groups for transparency and accountability.
The COPIED Act, introduced by a bipartisan group of senators, aims to protect journalists and artists from unauthorized use of their work by AI models. The bill directs the National Institute of Standards and Technology to create standards for authenticating and detecting AI-generated content, including the use of digital watermarks. It requires AI tools to allow users to attach information about the content's origin, which cannot be removed. Content owners can sue companies for unauthorized use or tampering with authentication markers, with enforcement by state attorneys general and the Federal Trade Commission. The bill has garnered support from various publishing and artists' groups, emphasizing the need for transparency and accountability in AI-generated content to safeguard individuals' rights over their likeness and work. The COPIED Act is part of a series of AI-related bills in the Senate, reflecting a growing effort to regulate AI technology effectively.
Related
Colorado has a first-in-the-nation law for AI – but what will it do?
Colorado enforces pioneering AI regulations for companies starting in 2026. The law mandates disclosure of AI use, data correction rights, and complaint procedures to address bias concerns. Experts debate its enforcement effectiveness and impact on technological progress.
Microsoft CEO of AI: Your online content is 'freeware' fodder for training models
Mustafa Suleyman, CEO of Microsoft AI, drew backlash for describing online content as "freeware" that is fair game for training neural networks. The debate raises concerns about copyright, AI training, and intellectual property rights.
The Center for Investigative Reporting Is Suing OpenAI and Microsoft
The Center for Investigative Reporting (CIR) sues OpenAI and Microsoft for copyright infringement, alleging unauthorized use of stories impacting relationships and revenue. Legal action mirrors media concerns over tech companies using journalistic content without permission.
AI Companies Need to Be Regulated: Open Letter
AI companies face calls for regulation due to concerns over unethical practices highlighted in an open letter by MacStories to the U.S. Congress and European Parliament. The letter stresses the need for transparency and protection of content creators.
OpenAI pleads it can't make money without using copyrighted material for free
OpenAI requests British Parliament to permit copyrighted material for AI training. Facing legal challenges from NYT and Authors Guild for alleged copyright infringement. Debate impacts AI development and copyright protection, raising concerns for content creators.
What's an AI watermark?
What's a work of art?
How inundated do they wish to be by works carrying supposedly unique digital signatures that are nonetheless susceptible to collision attacks?
The Law cannot match Mathematics in formulation, nor can it match it in meaning.
I wish mathematicians spent more time educating legislators so the rest of us can think and communicate in peace and clarity.
> The bill also directs NIST to develop cybersecurity measures to prevent tampering with provenance and watermarking on AI content.
So somehow NIST would have to develop a cybersecurity measure that stops me from making an AI-generated image and just screenshotting it, discarding any metadata. I'm not sure how realistic that is.
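The metadata half of this is easy to demonstrate even without a screenshot. Here is a rough stdlib-only Python sketch (the `provenance` tEXt keyword and its value are invented for illustration, not any real standard's field names): provenance carried in a PNG's ancillary chunks can be dropped with a few lines of code, and a screenshot never copies those chunks in the first place.

```python
import struct, zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk: 4-byte big-endian length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_ancillary(png: bytes) -> bytes:
    """Drop every ancillary chunk (lowercase first type letter), keeping only
    the critical chunks a decoder needs: IHDR, PLTE, IDAT, IEND."""
    out, pos = png[:8], 8          # keep the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length    # length + type + data + CRC
        if ctype[0:1].isupper():   # uppercase first letter = critical chunk
            out += png[pos:end]
        pos = end
    return out

# A minimal 1x1 grayscale PNG with a hypothetical "provenance" marker attached.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = chunk(b"IDAT", zlib.compress(b"\x00\x80"))  # filter byte + one pixel
text = chunk(b"tEXt", b"provenance\x00generated-by: hypothetical-model")
png = b"\x89PNG\r\n\x1a\n" + ihdr + idat + text + chunk(b"IEND", b"")

stripped = strip_ancillary(png)
print(b"provenance" in png)        # True
print(b"provenance" in stripped)   # False
```

The stripped file still decodes to the identical pixels; only the provenance record is gone, which is exactly what re-rasterizing via a screenshot does implicitly.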
If the watermark were embedded in the actual pixels of the image, then of course it would be preserved by a screenshot, but my understanding is that such watermarks are not very robust to the image being resized, compressed, or cropped.
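To see why naive pixel-level watermarks are fragile, here is a toy least-significant-bit scheme in Python (a sketch, not any real watermarking algorithm): the mark survives a lossless copy but is wiped out by anything that perturbs low-order pixel values, which is exactly what lossy compression and resampling do.

```python
def embed_lsb(pixels, bits):
    # Hide one watermark bit in the least-significant bit of each pixel.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    # Read the watermark back out of the low-order bits.
    return [p & 1 for p in pixels[:n]]

def quantize(pixels, step=8):
    # Crude stand-in for lossy (JPEG-like) re-encoding: values snap to the
    # nearest multiple of `step`, wiping out the low-order bits.
    return [min(255, round(p / step) * step) for p in pixels]

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
image = [123, 50, 200, 17, 88, 240, 65, 133]

marked = embed_lsb(image, watermark)
print(extract_lsb(marked, 8) == watermark)            # True: survives a lossless copy
print(extract_lsb(quantize(marked), 8) == watermark)  # False: destroyed by re-encoding
```

Production schemes spread the mark across frequency coefficients rather than individual bits, which buys some robustness to compression, but cropping and rescaling still degrade them.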
At the end of the day, you can also just take a picture of an AI-generated image on your screen with a camera (bonus points if you're in a future where cameras embed digital signatures to verify that the image is authentic and not AI-generated).
The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, like through watermarking. It also directs the agency to create security measures to prevent tampering and requires AI tools for creative or journalistic content to let users attach information about their origin and prohibit that information from being removed. Under the bill, such content also could not be used to train AI models.
I thought the headline was indicating it would be illegal to remove watermarks from content generated by LLMs.
Good luck enforcing that in open source tools.
> Such content also could not be used to train AI models.
And... how would you make sure it won’t happen?
I don’t think they know how the digital world works, let alone generative AI. You might regulate big companies, but never the community, not to mention companies outside the US that can still access such material and train their models on it.