June 24th, 2024

Colorado has a first-in-the-nation law for AI – but what will it do?

Colorado enforces pioneering AI regulations for companies starting in 2026. The law mandates disclosure of AI use, data correction rights, and complaint procedures to address bias concerns. Experts debate its enforcement effectiveness and impact on technological progress.


Colorado has become the first state in the U.S. to implement comprehensive regulations on the use of artificial intelligence (AI) systems in decision-making processes within companies. The new law, set to take effect in 2026, aims to protect the public from potential bias or discrimination embedded in AI systems. It requires companies to disclose when AI is being used and allows individuals to correct input data or file complaints if they feel unfairly treated. The law covers industries such as education, employment, finance, healthcare, and more, focusing on consequential decisions involving AI. While some experts believe the law lacks teeth to enforce changes in company practices, others see it as a necessary step towards transparency in AI decision-making. Governor Jared Polis expressed concerns about the law's impact on technological advancements but hopes it will spark a national conversation on AI regulation. The law is seen as a work in progress, aiming to balance AI's business benefits with fairness and reliability for individuals affected by its decisions.

8 comments
By @tensor - 4 months
It's unfortunate that these laws focus on AI. Why isn't it illegal to use traditional algorithms that have bias? Why aren't there regulations ensuring bias testing when humans themselves make decisions?

In all cases the bias originates from human behaviour. The one advantage of AI being used is that now that bias is surfaced. But it's always been there, which is why the machine learning algorithms learn it. It's just that no one typically looks at the data in aggregate prior to AI.

In any case, these rules should not be scoped to AI in my opinion. They should include algorithms and also require bias testing for teams of humans making decisions. If anything, I'd trust an AI system that has been analyzed for bias over a human making a decision.

By @yuliyp - 4 months
So this actually sounds like a realistic approach to managing the dangers of AI. It ensures that people have some sort of recourse against algorithms deciding to make their life miserable. This feels like an extension of similar approaches used in credit: credit rating agencies have to allow you to look at the data used, and are required to have flows for people to challenge the data there that may be harming them.

Certainly it's a very different approach from people trying to mandate that AIs must be designed in ways so that they can't be used for bad stuff (which to me feels like a fundamentally broken approach).

By @flaque - 4 months
> Whether (people) get insurance, or what the rate for their insurance is, or legal decisions or employment decisions, whether you get fired or hired, could be up to an AI algorithm

This is a bit like trying to regulate horseshoes while everyone else is talking about speed limits and seat belts. Both parties say the words "carriage" and "passenger", but they have completely different ideas in their heads about what is about to happen.

By @jqpabc123 - 4 months
People make bad decisions too --- but they rarely do so alone. Try hiring someone in an established company without multiple levels of review.

The problem with AI is that we know these models are flawed but they are being implemented anyway in an effort to save money.

If you have to manually review all AI results, the cost savings start to evaporate. Particularly if it leads to lawsuits.

Imagine trying to explain in court how/why AI decided to fire someone.

The real culprit here is greed.

By @ForHackernews - 4 months
Same thing the Colorado law that required employers to include salaries in job ads did: exclude Colorado residents.

By @w-ll - 4 months
I thought Tennessee was first due to their massive music industry.

https://www.npr.org/2024/03/22/1240114159/tennessee-protect-...

By @geodel - 4 months
For starters, catch the violators and fine them.