Colorado AI Act: Deployer Duties
Deployer Duties [C.R.S. § 6-1-1703]
Citation: C.R.S. § 6-1-1703 (deployer duties)
Q: What must AI deployers do under the Colorado AI Act? A: Deployers must use reasonable care, implement a risk management program, complete impact assessments, and provide consumer disclosures before using AI for consequential decisions [§ 6-1-1703].
Key rule (§ 6-1-1703): Deployers of high-risk AI systems must implement risk management programs aligned with the NIST AI RMF or ISO/IEC 42001, complete impact assessments at least annually, and notify consumers before consequential decisions and after adverse ones.
Rule: Deployers have ongoing obligations to manage AI risks and inform consumers when AI makes decisions that affect them.
Core Duty: Reasonable Care [§ 6-1-1703(1)]
A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
Rebuttable presumption: A deployer is presumed to have used reasonable care if it complies with the requirements of § 6-1-1703.
Risk Management Program [§ 6-1-1703(2)]
Deployers must implement and maintain a risk management policy and program that:
| Requirement | Description |
|---|---|
| Identifies risks | Document known/foreseeable discrimination risks |
| Mitigates risks | Implement measures to reduce risks |
| Is iterative | Continuously updated |
| Aligns with frameworks | NIST AI RMF or ISO 42001 |
Recognized frameworks:
- NIST Artificial Intelligence Risk Management Framework
- ISO/IEC 42001
- Other nationally/internationally recognized frameworks
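The Act does not prescribe any particular record format for this program. As a purely illustrative sketch, a deployer might track risks and mitigations in a structured register like the following (Python; all class and field names are hypothetical, not statutory terms):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DiscriminationRisk:
    """A known or reasonably foreseeable risk of algorithmic discrimination."""
    description: str
    mitigation: str        # measure implemented to reduce the risk
    last_reviewed: date    # supports the iterative-update requirement

@dataclass
class RiskManagementProgram:
    """Hypothetical register for a § 6-1-1703(2) program."""
    system_name: str
    framework: str         # e.g., "NIST AI RMF" or "ISO/IEC 42001"
    risks: list[DiscriminationRisk] = field(default_factory=list)

    def stale_risks(self, as_of: date, max_age_days: int = 365) -> list[DiscriminationRisk]:
        """Flag entries not reviewed recently (the cadence is a policy choice, not statutory)."""
        return [r for r in self.risks if (as_of - r.last_reviewed).days > max_age_days]
```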
Impact Assessments [§ 6-1-1703(3)]
Deployers must complete impact assessments for each high-risk AI system:
| Requirement | Timing |
|---|---|
| Initial assessment | Before deployment |
| Annual updates | At least annually |
| After substantial modification | Within 90 days of the modification |
| Retention | At least 3 years after final deployment |
Required Assessment Contents
- Purpose and intended use of the AI system
- Analysis of discrimination risks
- Categories of data the system processes
- Outputs and decisions made
- Mitigation measures implemented
- Monitoring procedures
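To make the cadence concrete, here is a minimal Python sketch that mirrors the required contents and computes review and retention deadlines from the table above. Field and function names are hypothetical, and the 365-day year is a simplification:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Hypothetical record mirroring the required contents above."""
    system_purpose: str
    discrimination_risk_analysis: str
    data_categories: str
    outputs_and_decisions: str
    mitigation_measures: str
    monitoring_procedures: str
    completed_on: date

def next_review_due(assessment: ImpactAssessment) -> date:
    """Annual update: the next review is due a year after the last assessment."""
    return assessment.completed_on + timedelta(days=365)

def retention_deadline(final_deployment: date) -> date:
    """Keep assessments for at least 3 years after final deployment."""
    return final_deployment + timedelta(days=3 * 365)
```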
Consumer Disclosures [§ 6-1-1703(4)-(5)]
Before Consequential Decision [§ 6-1-1703(4)]
Before using a high-risk AI system to make a consequential decision, deployers must notify consumers of the following (assembled in the sketch after the table):
| Disclosure | Description |
|---|---|
| AI is being used | That a high-risk AI system is in use |
| Plain language description | What the system does |
| Nature of decision | What decision is being made |
| Contact information | How to reach the deployer |
| How to access statements | Where to find more information |
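A deployer automating this notice might collect the five disclosures into a single payload before any decision is made. A hypothetical sketch; the statute requires the substance of these disclosures, not this structure:

```python
def pre_decision_notice(system_description: str, decision_type: str,
                        deployer_contact: str, statement_url: str) -> dict:
    """Assemble the pre-decision disclosures from the table above (keys are illustrative)."""
    return {
        "high_risk_ai_in_use": True,                       # that a high-risk AI system is in use
        "plain_language_description": system_description,  # what the system does
        "nature_of_decision": decision_type,               # what decision is being made
        "deployer_contact": deployer_contact,              # how to reach the deployer
        "more_information": statement_url,                 # where to find required statements
    }
```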
After Adverse Decision [§ 6-1-1703(5)]
If the consequential decision is adverse to the consumer, deployers must provide the following (see the sketch after the table):
| Disclosure | Description |
|---|---|
| Principal reasons | Why the decision was made |
| AI’s role | How the AI contributed to the decision |
| Data used | Types of data processed and sources |
| Correction opportunity | Chance to correct personal data |
| Appeal opportunity | Right to appeal, with human review where technically feasible |
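The adverse-decision disclosures can be assembled the same way. A hypothetical sketch; the correction and appeal entries stand in for real workflows the deployer would have to operate:

```python
def adverse_decision_notice(principal_reasons: list[str], ai_role: str,
                            data_types_and_sources: str,
                            correction_url: str, appeal_url: str) -> dict:
    """Assemble post-adverse-decision disclosures (keys and URL parameters are illustrative)."""
    return {
        "principal_reasons": principal_reasons,   # why the decision was made
        "ai_contribution": ai_role,               # how the AI contributed to the decision
        "data_used": data_types_and_sources,      # types and sources of data processed
        "correct_personal_data": correction_url,  # opportunity to correct personal data
        "request_human_review": appeal_url,       # appeal, where technically feasible
    }
```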
Small Deployer Exception [§ 6-1-1703(6)]
Some requirements don't apply if all of the following hold (see the eligibility sketch below):
- Deployer employs fewer than 50 full-time equivalent employees
- Deployer does not use its own data to train the system
- AI is used only for the intended uses disclosed by the developer
- AI continues learning from data sources other than the deployer's own data
- Deployer makes the developer's impact assessment available to consumers
Waived requirements:
- Risk management program (subsection 2)
- Impact assessments (subsection 3)
- Adverse decision disclosures (subsection 5)
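Because every condition must hold at once, exemption eligibility reduces to a conjunction. A hypothetical sketch of the test, reflecting the conditions listed above:

```python
def small_deployer_exemption_applies(fte_count: int,
                                     trains_on_own_data: bool,
                                     within_developer_intended_uses: bool,
                                     learns_only_from_external_data: bool,
                                     developer_assessment_available: bool) -> bool:
    """True only if all exemption conditions are met (parameter names are illustrative)."""
    return (fte_count < 50
            and not trains_on_own_data
            and within_developer_intended_uses
            and learns_only_from_external_data
            and developer_assessment_available)
```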
Deployer Checklist
Before deploying a high-risk AI system, confirm each item below (a readiness sketch follows the list):
- Implement risk management policy aligned with NIST/ISO
- Complete initial impact assessment
- Establish consumer disclosure process
- Create adverse decision notification process
- Set up appeal/correction mechanisms
- Schedule annual impact assessment updates
- Establish 3-year retention for assessments
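Compliance teams sometimes encode such a checklist as a simple readiness gate. A hypothetical illustration, with item names tracking the list above:

```python
DEPLOYER_CHECKLIST = {
    "risk_management_policy_aligned": False,   # NIST AI RMF or ISO/IEC 42001
    "initial_impact_assessment": False,
    "consumer_disclosure_process": False,
    "adverse_decision_notifications": False,
    "appeal_and_correction_mechanisms": False,
    "annual_assessment_schedule": False,
    "three_year_retention_policy": False,
}

def ready_to_deploy(checklist: dict[str, bool]) -> bool:
    """True only when every checklist item is complete."""
    return all(checklist.values())
```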