The EU's AI Act is now in force
The EU's AI Act, effective August 1, 2024, establishes compliance timelines and risk tiers for AI applications, imposing strict regulations on high-risk systems and lighter requirements for general-purpose AIs.
Read original articleThe European Union's AI Act officially came into effect on August 1, 2024, initiating a phased compliance timeline for AI developers and applications. The Act categorizes AI applications into three risk tiers: low/no-risk applications are exempt, while high-risk applications, such as biometrics and AI in healthcare, must adhere to strict compliance and quality management obligations, including pre-market assessments and potential regulatory audits. High-risk systems used by public authorities must be registered in an EU database. A "limited risk" category includes technologies like chatbots, which must meet transparency requirements. Penalties for non-compliance vary, with fines reaching up to 7% of global annual turnover for violations of banned applications.
Developers of general-purpose AIs (GPAIs) face lighter transparency requirements but must summarize training data and ensure copyright compliance. Only the most powerful GPAIs, defined by their computational capacity, will need to conduct risk assessments. Enforcement of the Act's general rules will be managed by member state bodies, while GPAI regulations will be enforced at the EU level. The specific compliance requirements for GPAI developers are still under discussion, with Codes of Practice expected to be finalized by April 2025. OpenAI has indicated its intention to collaborate with the EU AI Office during the implementation of the Act, providing guidance for compliance. The European Commission has tasked standards bodies with developing detailed requirements for high-risk AI systems, with a deadline set for April 2025.
Related
My take is that this act is an attempt by the EU to build some AI companies. They failed at the cloud - where China massively succeeded. They do not want to make the same mistake again.
There are provisions to (1) cause trouble for entrants (certifying LLMs), (2) provide national infra for testing (subsidize).
What exactly does this mean? Will AI companies allow copyright holders to opt-out of having their data or works used as part of training? Are there tools to prune data from already trained models? I'm not sure how questions related to copyright are going to be resolved with existing AI models.
> no recording in public
this is the only one that sounds like good common sense but isn't even regulation it's just law any sort of legal system would enforce.
all these "safety" things means ai offerings are all useless like google after 2003 when anything you query like "list of fad diet" is transformed into instead of an answer to that "problematic" question, an answer to "NVM LOL HERE IS A LIST OF HEALTHY DIETS"
> can't train on copyright material
that's so dumb. this world has no interesting copyright content coming out for 2 decades, it's literally all sellouts. losing to AI (which will happen regardless of whether the training material contains any copyright content) is exactly what they deserve. don't forget that these are also people who sue random civilians for "damages" that only exit in their head. sellout culture no longer even applies to musicians and book authors, it's also the majority of software too, with people just creating software for part of their resume, or some crowdsource shit, or a blog. any software anyone makes in a corporation is just to host an ad or be a dev tool for people who just host an ad
https://time.com/6288245/openai-eu-lobbying-ai-act/
>Still, OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called “foundation models,” or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments. OpenAI supported the late introduction of “foundation models” as a separate category in the Act, a company spokesperson told TIME.
Given how much we see these 'general' systems in political propaganda (think of all the Twitter far right propaganda unmasked by users replying with 'ignore previous instructions') I'm not sure I agree that these models do not fall under the high risk label.
The rules for high-risk systems are understandable, mostly increased human oversight and increased transparency: https://artificialintelligenceact.eu/section/3-2/
I understand that OpenAI does not want increased transparency in line with being the least transparent AI company out there. It's not like they don't have the money to follow these rules.