Colorado has a first-in-the-nation law for AI – but what will it do?
Colorado will begin enforcing pioneering AI regulations for companies in 2026. The law mandates disclosure of AI use, rights to correct input data, and complaint procedures to address bias concerns. Experts debate how effectively it can be enforced and its impact on technological progress.
Colorado has become the first state in the U.S. to implement comprehensive regulations on the use of artificial intelligence (AI) systems in decision-making processes within companies. The new law, set to take effect in 2026, aims to protect the public from potential bias or discrimination embedded in AI systems. It requires companies to disclose when AI is being used and allows individuals to correct input data or file complaints if they feel unfairly treated. The law covers industries such as education, employment, finance, and healthcare, focusing on consequential decisions involving AI. While some experts believe the law lacks the teeth to force changes in company practices, others see it as a necessary step toward transparency in AI decision-making. Governor Jared Polis expressed concerns about the law's impact on technological advancement but hopes it will spark a national conversation on AI regulation. The law is seen as a work in progress, aiming to balance AI's business benefits with fairness and reliability for individuals affected by its decisions.
Related
Public servants uneasy as government 'spy' robot prowls federal offices
Public servants in Gatineau are uneasy as a robot from the VirBrix platform optimizes workspaces by collecting data on air quality and light levels. Despite assurances, the Government Services Union expresses privacy concerns.
AI is exhausting the power grid
Tech firms, including Microsoft, face a power crisis due to AI's energy demands straining the grid and increasing emissions. Fusion power exploration aims to combat fossil fuel reliance, but current operations heavily impact the environment.
OpenAI and Anthropic are ignoring robots.txt
Two AI companies, OpenAI and Anthropic, are reported to be disregarding robots.txt rules, scraping web content despite claiming to respect such directives. TollBit analytics revealed this behavior, raising concerns about data misuse.
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
AI can't fix what automation already broke
Generative AI aids call center workers by detecting distress and providing calming family videos. Critics see AI as a band-aid for automation-induced stress, questioning its effectiveness and broader implications.
In all cases the bias originates from human behaviour. The one advantage of using AI is that the bias is now surfaced. But it's always been there, which is why the machine learning algorithms learn it. It's just that no one typically looked at the data in aggregate before AI.
In any case, these rules should not be scoped to AI, in my opinion. They should cover algorithms generally and also require bias testing for teams of humans making decisions. If anything, I'd trust an AI system that has been analyzed for bias over a human making a decision.
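For a concrete sense of what "analyzed for bias" could mean in practice, here is a minimal sketch of a disparate-impact audit in Python, using the "four-fifths" rule of thumb from U.S. employment guidelines. The hiring log, group labels, and 0.8 threshold are hypothetical illustrations, not anything specified by the article or the Colorado law.

    from collections import defaultdict

    def selection_rates(decisions):
        """Compute per-group selection rates from (group, approved) pairs."""
        totals = defaultdict(int)
        approved = defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            if ok:
                approved[group] += 1
        return {g: approved[g] / totals[g] for g in totals}

    def disparate_impact_ratio(decisions):
        """Ratio of the lowest to the highest group selection rate.
        Values below 0.8 fail the common 'four-fifths' rule of thumb."""
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values()), rates

    # Hypothetical audit log of hiring decisions: (applicant_group, was_hired)
    log = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]

    ratio, rates = disparate_impact_ratio(log)
    print(f"selection rates: {rates}")
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 here -> flag for review

Note that the same check works whether the decisions came from a model or a hiring committee, which is the commenter's point: the audit is on outcomes, not on what produced them.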
Certainly it's a very different approach from trying to mandate that AIs be designed so that they can't be used for bad stuff (which to me feels like a fundamentally broken approach).
This is a bit like trying to regulate horseshoes while everyone else is talking about speed limits & seat belts. Both parties say the words "carriage" and "passenger", but they have completely different ideas in their heads about what is about to happen.
The problem with AI is that we know these models are flawed but they are being implemented anyway in an effort to save money.
If you have to manually review all AI results, the cost savings start to evaporate. Particularly if it leads to lawsuits.
Imagine trying to explain in court how/why AI decided to fire someone.
The real culprit here is greed.
https://www.npr.org/2024/03/22/1240114159/tennessee-protect-...