A group appointed by Gov. Jared Polis to come up with a framework to implement a state law regulating “consequential” decision making by artificial intelligence systems has released a proposal on how to do it.
The AI Policy Working Group detailed its findings Tuesday. The 2024 law creating the regulations has never taken effect; the state legislature delayed its implementation to June 2026 amid disagreements over how to balance consumer protections with innovation and workability.
The working group recommendations still need to be ironed out at the State Capitol in a bill and would need to clear the legislature this session.
“I am very grateful to the hardworking members of the Colorado AI Policy Working Group that have reached a unanimous agreement on AI policy to protect consumers and support innovation in our state,” Polis said in a written statement.
Colorado has been grappling with its AI policy since 2024, when lawmakers passed the country's first comprehensive law to regulate how companies and governments use artificial intelligence to make key decisions over people's lives. The law isn’t aimed at deepfakes or fraud, but applies to how AI is used in evaluating people for things like school applications, hiring, loans, access to health care or insurance. The goal, bill sponsors said at the time, is to prevent discrimination.
The working group tried to tackle the thorniest issues: transparency and liability. It recommends that an AI system’s developer describe key parts of the system to the people who use it to make key decisions, the deployers, including the tool’s intended use, the categories of data used to train the system, its limitations, instructions for appropriate monitoring, and “meaningful human review, where applicable.”
The user of the AI system would also need to tell the public, in plain language, the role the AI system played in making a consequential decision. A business, government entity, or school that uses AI to make a consequential decision would also need to provide “a clear and conspicuous notice to consumers.”
When something goes wrong, liability would be assigned to deployers and developers based on the role each played in the failure, the working group recommends. For instance, under existing anti-discrimination and consumer protection laws, one question would be whether the deployer used the system the way it was advertised, configured, or contracted.
The attorney general’s office would create rules on disclosures a deployer must provide to a consumer following an adverse outcome involving an AI system.
The Colorado Technology Association, which participated in the working group, said it voted to move the framework forward based on “targeted revisions.”
“We look forward to seeing this progress reflected in the forthcoming legislation and to continuing the dialogue on how to protect consumers while enabling innovation to thrive,” said CTA President Brittany Morris Saunders.
Democratic Rep. Brianna Titone of Arvada was a sponsor of the 2024 law. She said the recommendations are a good place to start, but said it’s not clear that a bill with these parameters can get through the legislative process without significant changes.
“While the voting members did agree, there were many caveats to their ‘yes’ votes. It's a meaningful step forward, but only if the proposed bill can stay on this trajectory,” Titone said.
Democratic Senate Majority Leader Robert Rodriguez was the main sponsor of the 2024 law. He said he appreciates the task force’s work and looks forward to reading the recommendations.
“The devil's in the details. I've expressed to the governor's office my priority is to make sure that they're letting the consumer know that they have access to the decision, and an opportunity to correct. Those are my core values,” said Rodriguez.
Rodriguez said he wants the law to have teeth.
To read more stories from Colorado Public Radio, visit www.cpr.org.