EGAIR – European Guild for Artificial Intelligence Regulation
The European Guild for Artificial Intelligence Regulation (EGAIR) advocates for AI regulation in Europe to protect data rights and artistic integrity. They propose consent requirements for AI training data and collaborate with experts to lobby for EU regulations.
The European Guild for Artificial Intelligence Regulation (EGAIR) is a coalition of artists, creatives, and associations in Europe advocating for the regulation of AI companies. They highlight the exploitation of data and intellectual property without consent, particularly concerning generative AIs using copyrighted material for profit. EGAIR proposes regulations requiring explicit consent for AI training data, prohibiting unlicensed use of names and media, and establishing transparency in AI-generated content. The group collaborates with legal and human rights experts to lobby for EU-level regulations and raise awareness in the creative community. EGAIR's efforts have gained support from over 8,000 individuals and prominent figures in the arts industry. The organization aims to address specific EU issues related to AI regulation and emphasizes the need for collective action to protect artistic integrity and data rights. Additionally, EGAIR provides opportunities for professionals to join their cause and contribute to shaping AI legislation in Europe.
Related
OpenAI and Anthropic are ignoring robots.txt
Two AI companies, OpenAI and Anthropic, are reported to be disregarding robots.txt rules, scraping web content despite claiming to respect the protocol. TollBit analytics revealed this behavior, raising concerns about data misuse.
We need an evolved robots.txt and regulations to enforce it
In the era of AI, the robots.txt file faces limitations in guiding web crawlers. Proposals advocate for enhanced standards to regulate content indexing, caching, and language model training. Stricter enforcement, including penalties for violators like Perplexity AI, is urged to protect content creators and uphold ethical AI practices.
My Memories Are Just Meta's Training Data Now
Meta's use of personal content from Facebook and Instagram for AI training raises privacy concerns. European response led to a temporary pause, reflecting the ongoing debate on tech companies utilizing personal data for AI development.
Colorado has a first-in-the-nation law for AI – but what will it do?
Colorado enforces pioneering AI regulations for companies starting in 2026. The law mandates disclosure of AI use, data correction rights, and complaint procedures to address bias concerns. Experts debate its enforcement effectiveness and impact on technological progress.
Hackers 'jailbreak' powerful AI models in global effort to highlight flaws
Hackers exploit vulnerabilities in AI models from OpenAI, Google, and xAI, sharing harmful content. Ethical hackers challenge AI security, prompting the rise of LLM security start-ups amid global regulatory concerns. Collaboration is key to addressing evolving AI threats.
I'm sympathetic to artists that want to exclude their works from giant training datasets that primarily end up benefiting the big players in AI without giving anything back. In a way, the first big digital data heist on mankind was executed by social media. Founders of various social media sites became extremely rich thanks to regular people's content being posted and shared (often illegally), without giving anything back to creators.
Will the rise of AI reduce opportunities for creatives? Almost certainly, but unlike essentially every other industry, the creative field won't be wiped out by automation, because humans won't stop liking things created by humans. As in music, there will be a shift toward performance, where customers and clients are engaged in the creative process. In many ways it will be a return to something like the golden age of portraiture, with people paying for engagement.
The age of AI offers creatives huge opportunities to invent new art forms, made in new ways for new kinds of consumers. Creatives can choose to engage with that or to throw sabots. As London's liverymen show, guilds cannot stop the tide; the opportunity is to become something new that floats on the rising waters.
The manifesto's proposals, summarized:
1. Create a new type of copyright called a "training right".
2. This "training right" also applies to names.
3. This "training right" also applies to all usage with AI, even if it doesn't involve training (e.g. as an input to the AI software, such as Img2Img).
4. All AI-generated materials must be labeled as such, and all AI activity catalogued and logged.
5. Public-domain works are no longer fully public domain, and freely licensed media is no longer fully freely licensed: neither carries a "training right" by default, on the grounds that "it would not have been possible to foresee its use in a dataset to train an AI model".
> People: I don't want my stuff to be used to train models.
> Companies: To use our service you grant us perpetual license to use your stuff in whatever way we want, and also the right to sublicense so we can sell your stuff to others while granting them the same rights.
> People: Sure, here you go! Here's my art/code/voice/face/photos/videos/telemetry!
> Companies: [use data according to the license that was granted to them]
> People: pikachu_face.jpg
So enforcing what the manifesto demands wouldn't change much, if anything at all.
Disclaimer: I have pirated others' stuff (e.g. anime, manga, novels, music; I have also shared memes with others, which counts as distribution), so I can't complain when others pirate my stuff without being a hypocrite. The most I can do is call them out for profiting off it.
This is morally right; it doesn't matter what will happen in the material world. We will always have God and spirituality to comfort us.