
Colorado AI Act: Deployer Duties

Deployer Duties [C.R.S. § 6-1-1703]

Citation: C.R.S. § 6-1-1703 (deployer duties)

Q: What must AI deployers do under the Colorado AI Act? A: Deployers must use reasonable care, implement a risk management program, complete impact assessments, and provide consumer disclosures before using AI for consequential decisions [§ 6-1-1703].

Key rule (§ 6-1-1703): Deployers of high-risk AI systems must implement risk management programs aligned with the NIST AI RMF or ISO/IEC 42001, complete annual impact assessments, notify consumers before consequential decisions, and provide additional disclosures after adverse decisions.

Rule: Deployers have ongoing obligations to manage AI risks and inform consumers when AI makes decisions that affect them.


Core Duty: Reasonable Care [§ 6-1-1703(1)]

A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.

Rebuttable presumption: A deployer is presumed to have used reasonable care if it complies with the requirements of § 6-1-1703.


Risk Management Program [§ 6-1-1703(2)]

Deployers must implement and maintain a risk management policy and program that:

| Requirement | Description |
| --- | --- |
| Identifies risks | Document known/foreseeable discrimination risks |
| Mitigates risks | Implement measures to reduce risks |
| Is iterative | Continuously reviewed and updated |
| Aligns with frameworks | NIST AI RMF or ISO/IEC 42001 |

Recognized frameworks:

  • NIST Artificial Intelligence Risk Management Framework
  • ISO/IEC 42001
  • Other nationally/internationally recognized frameworks

Impact Assessments [§ 6-1-1703(3)]

Deployers must complete impact assessments for each high-risk AI system:

| Requirement | Timing |
| --- | --- |
| Initial assessment | Before deployment |
| Annual updates | Every year |
| After substantial modification | When the system changes significantly |
| Retention | At least 3 years |
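The timing rules in the table above can be tracked with a short scheduling helper. This is a hypothetical internal-compliance sketch, not statutory language; the function names are assumptions, and it covers only the annual-update and three-year-retention rows.

```python
from datetime import date

RETENTION_YEARS = 3  # table: retain each assessment for at least 3 years


def next_annual_review(last_assessment: date) -> date:
    """Annual updates: the next assessment is due one year after the last."""
    return last_assessment.replace(year=last_assessment.year + 1)


def assessment_overdue(last_assessment: date, today: date) -> bool:
    """True if more than a year has passed since the last assessment."""
    return today > next_annual_review(last_assessment)


def retention_expires(assessment_date: date) -> date:
    """Earliest date an assessment record may be discarded (at least 3 years)."""
    return assessment_date.replace(year=assessment_date.year + RETENTION_YEARS)
```

Note that a substantial modification would also trigger a fresh assessment regardless of the annual schedule, so this helper is only one input to a compliance calendar.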

Required Assessment Contents

  • Purpose and intended use of the AI system
  • Analysis of discrimination risks
  • Data used by the system
  • Outputs and decisions made
  • Mitigation measures implemented
  • Monitoring procedures

Consumer Disclosures [§ 6-1-1703(4)-(5)]

Before Consequential Decision [§ 6-1-1703(4)]

Before using AI for a consequential decision, deployers must notify consumers of:

| Disclosure | Description |
| --- | --- |
| AI is being used | That a high-risk AI system is in use |
| Plain-language description | What the system does |
| Nature of decision | What decision is being made |
| Contact information | How to reach the deployer |
| How to access statements | Where to find more information |

After Adverse Decision [§ 6-1-1703(5)]

If the consequential decision is adverse to the consumer, deployers must provide:

| Disclosure | Description |
| --- | --- |
| Principal reasons | Why the decision was made |
| AI’s role | How the AI contributed to the decision |
| Data used | Types of data processed and their sources |
| Correction opportunity | Chance to correct personal data |
| Appeal opportunity | Human review when feasible |
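For audit purposes, the required adverse-decision disclosures above can be modeled as a simple record that is checked for completeness before the notice is sent. This is a hypothetical sketch with invented field names, not a format prescribed by the Act.

```python
from dataclasses import dataclass, fields


@dataclass
class AdverseDecisionNotice:
    # One field per required disclosure in the table above (hypothetical names).
    principal_reasons: str          # why the decision was made
    ai_role: str                    # how the AI contributed
    data_types_and_sources: str     # types of data processed and their sources
    correction_instructions: str    # how to correct personal data
    appeal_instructions: str        # how to request human review


def notice_complete(n: AdverseDecisionNotice) -> bool:
    """Every disclosure field must be non-empty before the notice goes out."""
    return all(getattr(n, f.name).strip() for f in fields(n))
```

A deployer might run such a completeness check as a gate in whatever system generates consumer notices.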

Small Deployer Exception [§ 6-1-1703(6)]

Certain requirements do not apply to a small deployer (one with fewer than 50 full-time equivalent employees that does not use its own data to train the system) if:

  1. The AI is used for the intended uses disclosed by the developer
  2. The AI continues learning from data sources other than the deployer’s own data
  3. The deployer makes the developer’s impact assessment available to consumers

Waived requirements:

  • Risk management program (subsection 2)
  • Impact assessments (subsection 3)
  • Adverse decision disclosures (subsection 5)
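The all-or-nothing structure of the exception can be illustrated with a minimal sketch: the exception applies only if every statutory condition holds. This is a hypothetical tracking helper with invented condition labels, not a legal determination.

```python
def exemption_applies(conditions: dict[str, bool]) -> bool:
    """The subsection (6) exception applies only when every condition holds."""
    return all(conditions.values())


# Hypothetical condition labels mirroring the list above.
example_conditions = {
    "intended_uses_only": True,           # used for developer-disclosed intended uses
    "non_deployer_training_data": True,   # learns from non-deployer data sources
    "developer_assessment_shared": True,  # developer's impact assessment made available
}
```

If any single condition fails, the deployer falls back to the full set of duties in subsections (2), (3), and (5).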

Deployer Checklist

Before deploying a high-risk AI system:

  • Implement risk management policy aligned with NIST/ISO
  • Complete initial impact assessment
  • Establish consumer disclosure process
  • Create adverse decision notification process
  • Set up appeal/correction mechanisms
  • Schedule annual impact assessment updates
  • Establish 3-year retention for assessments
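A readiness review against the checklist above can be automated with a short gap-finder. The item names are hypothetical labels for this sketch, not statutory terms.

```python
# Checklist items in the order listed above (hypothetical labels).
CHECKLIST = [
    "risk_management_policy",
    "initial_impact_assessment",
    "consumer_disclosure_process",
    "adverse_decision_notice",
    "appeal_and_correction",
    "annual_review_schedule",
    "three_year_retention",
]


def missing_items(completed: set[str]) -> list[str]:
    """Return checklist items not yet completed, in checklist order."""
    return [item for item in CHECKLIST if item not in completed]
```

An empty result indicates the deployer has documented every item before deployment.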

This is not legal advice. Always refer to official sources for authoritative text.
