EU AI Act: Provider and Deployer Obligations for High-Risk AI Systems [Art 8-50]

Rule: Providers and deployers of high-risk AI systems must meet strict requirements covering risk management, data governance, documentation, conformity assessment, CE marking, registration, and ongoing monitoring. Compliance mandatory before market placement.

Effective: August 2, 2026 (high-risk system requirements)

Overview

Chapter III of the AI Act establishes a comprehensive regulatory framework for high-risk AI systems (listed in Annex III). The framework follows the “New Legislative Framework” model used for product safety regulation across the EU.

Key principle: Only compliant high-risk AI systems may be placed on the EU market or put into service.

Chapter III Structure

| Section | Articles | Coverage |
| --- | --- | --- |
| Section 1 | 6-7 | Classification of high-risk AI systems |
| Section 2 | 8-15 | Technical requirements for high-risk systems |
| Section 3 | 16-27 | Obligations of providers, deployers, and supply chain actors |
| Section 4 | 28-39 | Notifying authorities and notified bodies |
| Section 5 | 40-50 | Standards, conformity assessment, CE marking, registration |

Section 2: Requirements for High-Risk AI Systems [Art 8-15]

All high-risk AI systems must meet these technical requirements before market placement.

Article 8: Compliance with Requirements

Obligation: High-risk AI systems must comply with ALL requirements in Section 2.

Design principle: Requirements apply taking into account:

  • Intended purpose
  • State of the art
  • Reasonably foreseeable risks

Continuous compliance: Requirements apply throughout system lifecycle, including after deployment.

Article 9: Risk Management System

9.1 — Core Requirement

Providers must establish, implement, document, and maintain a continuous, iterative risk management system throughout the AI system lifecycle.

9.2 — Risk Identification and Analysis

Risk management system must identify and analyze known and reasonably foreseeable risks related to:

| Risk Category | Examples |
| --- | --- |
| Health risks | Misdiagnosis in medical AI, incorrect drug recommendations |
| Safety risks | Autonomous vehicle accidents, industrial robot malfunctions |
| Fundamental rights | Discrimination, privacy violations, freedom of expression |

Scope includes:

  1. Intended use: Risks when used as provider designed
  2. Reasonably foreseeable misuse: Risks from predictable incorrect use

9.3 — Risk Assessment

For each identified risk:

  1. Estimate magnitude and likelihood
  2. Evaluate against acceptable risk thresholds
  3. Prioritize for mitigation

9.4 — Risk Mitigation Measures

Hierarchy of controls:

| Priority | Approach | Example |
| --- | --- | --- |
| 1. Elimination | Design out the risk | Remove feature causing discrimination |
| 2. Minimization | Reduce risk through technical measures | Improve data quality, add constraints |
| 3. Information | Warn users about residual risks | Instructions for use, training requirements |

Balance: Mitigation measures should not degrade system performance beyond what’s necessary for safety.

9.5 — Post-Market Monitoring Integration

Risk management must incorporate learnings from:

  • Incident reports
  • User feedback
  • Post-market surveillance data

Continuous improvement: Update risk assessment when new information emerges.

9.6 — Testing Requirements

Risk management system must include:

  • Testing of identified risks
  • Testing with real or representative data
  • Validation that mitigation measures work
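
For illustration only, here is one way a provider might record identified risks and the test evidence showing that mitigations were validated; the class names, fields, and scoring scale below are assumptions, not terms defined by the Act.

```python
# Hypothetical risk-register entry and test-evidence record (illustrative sketch).
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Risk:
    risk_id: str
    category: str              # e.g. "health", "safety", "fundamental_rights"
    description: str
    source: str                # "intended_use" or "foreseeable_misuse"
    severity: int              # 1 (negligible) .. 5 (critical), provider-defined scale
    likelihood: int            # 1 (rare) .. 5 (frequent), provider-defined scale
    mitigations: List[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

@dataclass
class TestEvidence:
    risk_id: str
    test_name: str
    dataset: str               # real or representative data used for the test
    passed: bool
    run_date: date

risk = Risk("R-001", "fundamental_rights",
            "Lower accuracy for an under-represented group",
            source="intended_use", severity=4, likelihood=3,
            mitigations=["rebalance training data", "per-group accuracy threshold"])
evidence = TestEvidence("R-001", "per_group_accuracy_check",
                        dataset="held-out representative set",
                        passed=True, run_date=date(2026, 1, 15))
print(risk.score, evidence.passed)
```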

Article 10: Data and Data Governance

10.1 — Training Data Requirements

Quality criteria:

  • Relevant: Data corresponds to system’s intended purpose
  • Representative: Covers full range of use cases and user populations
  • Free from errors: Data cleaning and validation processes
  • Complete: No critical data gaps

Bias mitigation: Training, validation, and testing datasets must be:

  • Examined for possible biases
  • Built with appropriate statistical properties
  • Representative of the persons and groups on whom the system will be used
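
The representativeness criterion above can be examined in many ways; below is a minimal, purely illustrative sketch that compares subgroup shares in a training set against expected population shares. The group names, expected shares, and 20% tolerance are assumptions.

```python
# Hypothetical representativeness check (illustrative sketch).
from collections import Counter

def representativeness_report(records, group_key, expected_shares, tolerance=0.20):
    """Flag groups whose share in the data deviates from the expected population share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        deviation = abs(observed - expected) / expected if expected else float("inf")
        report[group] = {"observed": round(observed, 3),
                         "expected": expected,
                         "flag": deviation > tolerance}
    return report

training_records = ([{"age_band": "18-40"}] * 700
                    + [{"age_band": "40-65"}] * 250
                    + [{"age_band": "65+"}] * 50)
print(representativeness_report(training_records, "age_band",
                                {"18-40": 0.45, "40-65": 0.35, "65+": 0.20}))
```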

10.2 — Data Governance Practices

Providers must implement data management practices including:

| Practice | Description |
| --- | --- |
| Design choices | Decisions on data collection, preprocessing, formulation |
| Data collection | How data is obtained, from what sources |
| Data preparation | Cleaning, labeling, enrichment operations |
| Formulation assumptions | Mathematical representations, feature engineering |
| Assessment methods | How data quality is validated |
| Examination for biases | Systematic checks for unfair patterns |

10.3 — Special Categories of Data

Prohibited unless an exception applies: Processing of special categories of personal data (e.g., race or ethnicity, political opinions, religion, health, sexual orientation) for the purpose of bias detection and correction is permitted ONLY when:

  • Technically strictly necessary
  • Appropriate safeguards in place
  • Compliant with GDPR Article 9

10.4 — Data Retention

Training, validation, and testing datasets must be retained or documented throughout system lifecycle for:

  • Conformity assessment
  • Market surveillance
  • Post-market monitoring

Article 11: Technical Documentation

11.1 — Documentation Requirement

Providers must draw up technical documentation BEFORE placing system on market.

Purpose: Demonstrate system complies with AI Act requirements.

11.2 — Contents (Annex IV)

Technical documentation must include:

| Section | Contents |
| --- | --- |
| 1. General description | Intended purpose, developer, versions, lifecycle |
| 2. System description | Architecture, algorithms, outputs, data |
| 3. Detailed specs | Computational resources, measures for accuracy/robustness/cybersecurity |
| 4. Data | Training/validation/testing datasets, data governance, bias examination |
| 5. Risk management | Risk management system documentation, identified risks, mitigation measures |
| 6. Human oversight | Measures enabling human oversight, role descriptions |
| 7. Accuracy/robustness | Metrics, test results, performance under stress |
| 8. Cybersecurity | Security measures, vulnerability assessments |
| 9. Quality management | Quality management system documentation |
| 10. Changes and updates | Log of all substantial modifications |
| 11. Assessment | Results of conformity assessment, certificates |

Update requirement: Documentation must be kept current throughout system lifecycle.

11.3 — Retention Period

Technical documentation must be kept for 10 years after system placed on market or put into service.
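
As a purely illustrative sketch, a provider might keep the Annex IV sections listed above in a version-controlled manifest and check completeness before conformity assessment; the file paths and section keys below are assumptions, not a prescribed format.

```python
# Hypothetical documentation manifest keyed to the Annex IV sections above (illustrative).
ANNEX_IV_SECTIONS = [
    "general_description", "system_description", "detailed_specs", "data",
    "risk_management", "human_oversight", "accuracy_robustness", "cybersecurity",
    "quality_management", "changes_and_updates", "assessment",
]

manifest = {
    "system": "example-high-risk-system",
    "version": "2.3.0",
    "documents": {
        "general_description": "docs/annex_iv/01_general.md",
        "system_description": "docs/annex_iv/02_system.md",
        "risk_management": "docs/annex_iv/05_risk_management.md",
        # ...remaining sections would be listed here
    },
}

missing = [s for s in ANNEX_IV_SECTIONS if s not in manifest["documents"]]
if missing:
    print("Technical documentation incomplete; missing sections:", missing)
```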

Article 12: Record-Keeping (Automatic Logging)

12.1 — Logging Requirement

High-risk AI systems must have capabilities to automatically generate logs throughout system lifetime.

Minimum retention: Logs under the provider's or deployer's control must be kept for at least 6 months (Art 19 and Art 26), or longer where applicable Union or national law requires.

12.2 — Logged Events

Logs must enable:

  • Traceability of system functioning
  • Monitoring throughout lifecycle
  • Post-market surveillance

12.3 — What to Log

| Event Type | Examples |
| --- | --- |
| System activation | When system starts/stops operating |
| Input data | What data system received |
| Outputs | Decisions, recommendations, predictions made |
| User interactions | Human oversight actions |
| Anomalies | Errors, malfunctions, unexpected behavior |

Purpose: Enable investigation of incidents, discrimination claims, performance issues.
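
For illustration only, one way to produce append-only, tamper-evident logs of the kind described above is to hash-chain each record; the event fields and the chaining scheme are assumptions, not requirements stated in the Act.

```python
# Hypothetical append-only, hash-chained event log (illustrative sketch).
import hashlib
import json
from datetime import datetime, timezone

def append_event(log_path, event, prev_hash="0" * 64):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,              # e.g. activation, input received, output produced
        "prev_hash": prev_hash,      # links each record to the previous one
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]

h = append_event("ai_system.log", {"type": "activation", "operator": "user-42"})
h = append_event("ai_system.log", {"type": "output", "decision": "loan_declined",
                                   "confidence": 0.83}, prev_hash=h)
```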

12.4 — Log Access and Protection

Cybersecurity: Logs must be protected against tampering and unauthorized access.

GDPR compliance: Personal data in logs must comply with data protection requirements.

Article 13: Transparency and Information to Deployers

13.1 — Instructions for Use

Providers must provide clear, comprehensive, and easily accessible instructions for use in appropriate languages.

13.2 — Mandatory Information

Instructions must contain:

| Information | Details |
| --- | --- |
| Identity and contact | Provider name, address, authorized representative |
| Intended purpose | Specific use for which system is designed |
| Level of accuracy | Metrics, expected performance, limitations |
| Robustness measures | How system handles errors, stress, attacks |
| Known limitations | Scenarios where system may fail |
| Foreseeable misuse | How system might be misused and risks |
| Human oversight | How to implement effective oversight |
| Computational resources | Hardware/software requirements |
| Lifespan | Expected operational lifetime, maintenance needs |

13.3 — Characteristics of System

Instructions must specify:

  • Input data requirements (format, quality, relevance)
  • Output interpretation (what results mean, confidence levels)
  • Changes from previous versions (if applicable)

Article 14: Human Oversight

14.1 — Core Requirement

High-risk AI systems must be designed to enable effective human oversight during use.

Purpose: Prevent or minimize risks to health, safety, or fundamental rights.

14.2 — Human Oversight Measures

Systems must enable oversight persons to:

| Capability | Description |
| --- | --- |
| Understand | Fully comprehend system capabilities and limitations |
| Awareness | Remain aware of automation bias tendency |
| Interpret | Correctly interpret system output |
| Override | Decide not to use system or disregard/reverse output |
| Intervene | Interrupt system operation immediately |

14.3 — Interface Requirements

Human-machine interface must:

  • Present information in clear, understandable format
  • Enable real-time monitoring
  • Provide override/emergency stop mechanisms
  • Alert human when intervention needed
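
A minimal sketch of how such oversight hooks might look in code, assuming a confidence-based review threshold and a simple emergency-stop flag (both assumptions, not requirements from the Act):

```python
# Hypothetical human-oversight wrapper with review, override, and stop (illustrative).
class OverseenSystem:
    def __init__(self, model, review_threshold=0.90):
        self.model = model
        self.review_threshold = review_threshold
        self.stopped = False

    def stop(self):
        """Emergency stop: no further outputs until explicitly restarted."""
        self.stopped = True

    def decide(self, case, human_review):
        if self.stopped:
            raise RuntimeError("System stopped by human overseer")
        prediction, confidence = self.model(case)
        # Low-confidence outputs are routed to the human, who may disregard or reverse them.
        if confidence < self.review_threshold:
            return human_review(case, prediction, confidence)
        return prediction

def dummy_model(case):
    return ("approve", 0.72)

def reviewer(case, prediction, confidence):
    print(f"Review needed (confidence {confidence:.2f}); AI suggested {prediction!r}")
    return "refer_to_committee"    # the human reverses the AI output

system = OverseenSystem(dummy_model)
print(system.decide({"applicant": "A-17"}, reviewer))
```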

14.4 — Competence Requirements

Oversight persons must have:

  • Appropriate competence
  • Training specific to the system
  • Authority to act on decisions

Article 15: Accuracy, Robustness, and Cybersecurity

15.1 — Accuracy Requirements

Obligation: Systems must achieve appropriate level of accuracy throughout lifecycle.

Metrics: Accuracy must be:

  • Declared by provider in technical documentation
  • Measured using appropriate metrics for system type
  • Maintained within declared ranges during operation
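
For illustration, a small sketch of declaring an accuracy metric and checking during operation that the measured value stays within the declared range; the metric choice, threshold, and data are assumptions.

```python
# Hypothetical accuracy-declaration check (illustrative sketch, binary task).
declared = {"metric": "balanced_accuracy", "minimum": 0.88, "measured_at_release": 0.91}

def balanced_accuracy(predictions, labels):
    classes = set(labels)
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(labels) if y == c]
        recalls.append(sum(1 for i in idx if predictions[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 1, 0, 0, 0, 1, 1]
measured = balanced_accuracy(preds, labels)
print(f"balanced accuracy {measured:.2f}; "
      f"within declared range: {measured >= declared['minimum']}")
```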

15.2 — Robustness Requirements

Systems must be resilient against:

| Threat | Mitigation |
| --- | --- |
| Errors | Error handling, graceful degradation |
| Faults | Redundancy, failsafes |
| Inconsistencies | Input validation, anomaly detection |
| Adversarial attacks | Defensive mechanisms, input sanitization |

15.3 — Cybersecurity Requirements

Security by design: Systems must be secured against unauthorized access, modification, or data theft.

Technical measures:

  • Authentication and authorization
  • Encryption of data in transit and at rest
  • Secure logging
  • Vulnerability management
  • Incident response procedures

State of the art: Security measures must reflect current best practices.

Section 3: Obligations of Providers and Deployers [Art 16-27]

Article 16: Obligations of Providers

Providers of high-risk AI systems must:

16.1 — Compliance and Quality Management

  • Ensure system complies with ALL Section 2 requirements (Art 8-15)
  • Establish quality management system per Article 17
  • Keep technical documentation per Article 18
  • Preserve automatically generated logs under their control (Art 19)

16.2 — Market Access

  • Complete conformity assessment before placing on market (Art 43)
  • Draw up EU declaration of conformity (Art 47)
  • Affix CE marking on system (Art 48)
  • Register system in EU database (Art 49)

16.3 — Identification

  • Display provider name, registered trade name/trademark, contact address on system or packaging/documentation

16.4 — Post-Market Obligations

  • Take corrective actions when necessary (Art 20)
  • Inform authorities of non-compliance or risks (Art 20)
  • Cooperate with competent authorities (Art 21)
  • Ensure compliance with accessibility directives (EU 2016/2102, EU 2019/882)

16.5 — Demonstration of Conformity

  • Upon request, demonstrate conformity to national authorities in language they understand

Article 17: Quality Management System

17.1 — Core Requirement

Providers must establish, document, implement, and maintain a quality management system ensuring compliance with AI Act.

17.2 — System Components

Quality management system must systematically address:

| Component | Coverage |
| --- | --- |
| Strategy | Regulatory compliance plan, resource allocation |
| Design & development | Requirements specification, design controls, verification |
| Technical documentation | Documentation procedures, version control |
| Quality control | Testing, validation, acceptance criteria |
| Post-market monitoring | Surveillance plan, incident handling |
| Communication | Procedures for information to authorities and users |
| Accountability | Assignment of responsibilities, management oversight |

17.3 — Documented Procedures

Must include written procedures for:

  • Change management (handling substantial modifications)
  • Corrective and preventive actions
  • Risk management integration
  • Conformity assessment coordination

Article 18: Documentation Keeping

18.1 — Retention Obligation

Providers must keep ALL required documentation available for national authorities for 10 years after:

  • System placed on market, OR
  • System put into service

18.2 — Documents Covered

  • Technical documentation (Annex IV)
  • EU declaration of conformity (Art 47)
  • Quality management system documentation (Art 17)
  • Certificates from notified bodies (if applicable)
  • Updates and modifications log

18.3 — Format and Accessibility

Documentation must be:

  • In format easily accessible to authorities
  • In language(s) understandable by authorities
  • Maintained even if provider ceases operations (through successor arrangements)

Article 19: Automatically Generated Logs

19.1 — Provider Obligation

Providers must keep logs automatically generated by high-risk AI system under their control (e.g., cloud-based systems).

Minimum retention: At least 6 months (Art 19(1)), unless applicable Union or national law requires a longer period.

19.2 — Access for Authorities

Providers must make logs available to market surveillance authorities upon request.

Article 20: Corrective Actions and Duty of Information

20.1 — Non-Compliance Discovery

When provider has reason to believe high-risk system does not conform to AI Act:

Immediate actions:

  1. Take necessary corrective actions to bring into conformity
  2. Withdraw or recall system if appropriate
  3. Inform deployers and distributors

20.2 — Serious Incident or Malfunctioning

When provider has reason to believe system presents a risk:

Immediate notification to:

  • Market surveillance authorities in Member States where system available
  • Notified body that issued certificate (if applicable)

Information to provide:

  • Description of non-conformity and corrective actions taken
  • Identification of systems affected
  • Geographic scope (where systems available)

20.3 — Serious Incidents

Definition: An incident or malfunctioning that directly or indirectly leads to any of the following:

  • Death of a person or serious harm to a person's health
  • Serious and irreversible disruption of the management or operation of critical infrastructure
  • Infringement of obligations under Union law intended to protect fundamental rights
  • Serious harm to property or the environment

Notification timeline: Immediately after establishing a causal link between the AI system and the incident, or the reasonable likelihood of such a link.

Article 21: Cooperation with Competent Authorities

21.1 — Cooperation Duty

Upon reasoned request, providers must provide authorities with:

  • Information and documentation to demonstrate conformity
  • Access to logs
  • Cooperation in corrective actions

Language: In language easily understood by authorities.

21.2 — Response Timeline

Providers must respond to authority requests without undue delay.

Article 22: Authorized Representatives

22.1 — When Required

Providers established outside EU must appoint authorized representative established in EU before placing system on market.

22.2 — Representative Responsibilities

Authorized representative:

  • Acts on behalf of provider vis-à-vis authorities
  • Holds documentation
  • Cooperates with market surveillance
  • Can be addressed by authorities instead of provider

22.3 — Mandate Requirements

Written mandate must specify tasks and empower representative to:

  • Verify EU declaration of conformity and technical documentation are drawn up
  • Keep documentation available for 10 years
  • Provide authorities with information upon request
  • Cooperate in corrective actions

Article 23: Obligations of Importers

23.1 — Who is Importer

A natural or legal person located or established in the EU that places on the EU market an AI system bearing the name or trademark of a person established outside the EU.

23.2 — Key Obligations

Importers must:

  • Verify provider completed conformity assessment
  • Verify technical documentation available
  • Verify system bears CE marking and accompanied by documentation
  • Verify provider and authorized representative identified
  • Indicate own name, address on system or packaging
  • Ensure storage/transport conditions don’t jeopardize compliance
  • Keep register of non-conforming or recalled systems
  • Inform provider and authorities if system presents risk

23.3 — Market Surveillance Role

Importers act as bridge between non-EU providers and EU authorities.

Article 24: Obligations of Distributors

24.1 — Who is Distributor

Entity in supply chain (other than provider or importer) that makes high-risk AI system available on EU market.

24.2 — Key Obligations

Distributors must:

  • Verify system bears CE marking
  • Verify accompanied by documentation and instructions
  • Verify provider and importer identified
  • Ensure storage/transport conditions don’t jeopardize compliance
  • Inform provider/importer if system doesn’t appear to conform
  • Not place on market if doesn’t comply
  • Inform provider/importer and authorities if system presents risk

Article 25: Responsibilities Along the AI Value Chain

25.1 — Role Transformation

An entity may change roles:

| Scenario | New Role | Reason |
| --- | --- | --- |
| Importer modifies system | Provider | Substantial modification triggers provider obligations |
| Distributor modifies system | Provider | Substantial modification triggers provider obligations |
| Deployer modifies system substantially | Provider | Making significant changes to purpose/functionality |
| Provider uses own system | Also Deployer | Using system under own authority |

25.2 — Substantial Modification

Definition: Change after market placement that affects compliance with requirements or intended purpose.

Triggers provider obligations:

  • New conformity assessment
  • New CE marking
  • New registration

Examples:

  • Changing algorithm significantly
  • Retraining on fundamentally different dataset
  • Changing intended purpose
  • Modifying outputs or decision logic

Article 26: Obligations of Deployers

Deployers of high-risk AI systems must:

26.1 — Compliance with Instructions

  • Use system in accordance with instructions for use
  • Implement technical and organizational measures specified
  • Ensure system used only for intended purpose

26.2 — Human Oversight

  • Assign natural persons to human oversight roles
  • Ensure oversight persons have:
    • Necessary competence
    • Appropriate training
    • Sufficient authority
    • Necessary support

26.3 — Input Data Management

When deployer controls input data:

  • Ensure data is relevant for intended purpose
  • Ensure data is sufficiently representative

26.4 — Monitoring and Logging

  • Monitor operation based on instructions for use
  • Retain automatically generated logs for:
    • At least 6 months, OR
    • Longer period as applicable law requires

26.5 — Risk and Incident Reporting

Serious incidents:

  • Immediately inform provider, importer, or distributor
  • Immediately inform relevant market surveillance authorities

Suspected risks:

  • Inform provider of suspected risks
  • Cease use if presenting risk

26.6 — Workplace Notification (Employers)

Before deployment:

  • Inform workers’ representatives of high-risk AI system use
  • Inform affected workers they will be subject to system

26.7 — Public Authority Obligations

Additional requirements for public authorities/Union institutions:

  • Conduct fundamental rights impact assessment before use (Art 27)
  • Register use in EU database (Art 49(3))
  • Comply with registration requirements
  • Ensure system registered before putting into service

26.8 — Notification to Individuals

For high-risk systems making/assisting decisions on natural persons:

  • Inform individuals they are subject to use of high-risk AI system

Exceptions:

  • Law enforcement purposes (when notification would harm investigation)

26.9 — Law Enforcement - Remote Biometric Identification

Additional safeguards for post-remote biometric identification:

  • Obtain authorization from judicial or administrative authority
  • Complete fundamental rights impact assessment
  • Annual reporting on use to competent authorities

Authorization timing:

  • Prior to use (preferred), OR
  • Within 48 hours of use (urgent cases)

Judicial review: Authorization decisions subject to judicial remedy.

Article 27: Fundamental Rights Impact Assessment (FRIA)

27.1 — Who Must Conduct

Mandatory for:

  • Public authorities
  • Union institutions/bodies
  • Private entities providing public services

Before putting high-risk AI system into service.

27.2 — Assessment Contents

FRIA must include:

| Component | Description |
| --- | --- |
| Description | Deployer's processes where AI will be used |
| Deployment period | How long system will operate |
| Categories of persons | Who will be affected and how |
| Specific risks | Risks to fundamental rights identified |
| Purpose | Why system is being deployed |
| Benefits | Expected benefits justifying use |
| Mitigation measures | How risks will be addressed |
| Complementary measures | Procedural safeguards, human oversight |
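
As a purely illustrative sketch, the components above could be captured in a structured FRIA record like the following; the field names and example content are assumptions, not an official template.

```python
# Hypothetical FRIA record mirroring the components listed above (illustrative).
fria = {
    "deployer": "Municipality of Example",
    "system": "benefits-triage-assistant v1.4",
    "process_description": "Prioritisation of benefit applications for manual review",
    "deployment_period": "2026-09 to 2028-09",
    "categories_of_persons": ["benefit applicants", "case workers"],
    "specific_risks": ["indirect discrimination against single-parent households"],
    "purpose": "Reduce processing backlog",
    "expected_benefits": "Average decision time reduced from 8 to 3 weeks",
    "mitigation_measures": ["per-group error monitoring", "human review of all denials"],
    "complementary_measures": ["complaint channel", "quarterly oversight board review"],
}
```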

27.3 — Consultation Requirements

Deployers should consult:

  • Affected persons or representatives
  • Independent experts
  • Data protection officer (if processing personal data)
  • Workers’ representatives (workplace deployment)

27.4 — Documentation and Registration

  • Document FRIA systematically
  • Upload summary to EU database (Art 49(3))
  • Make available to market surveillance authorities
  • Integrate with GDPR DPIA if processing personal data

27.5 — Update Requirement

FRIA must be updated when:

  • System modified substantially
  • Use changes significantly
  • New risks emerge from monitoring

Section 4: Notifying Authorities and Notified Bodies [Art 28-39]

Overview: Conformity Assessment Ecosystem

For certain high-risk AI systems (notably the biometric systems listed in Annex III point 1, where harmonized standards or common specifications are not applied in full), third-party conformity assessment by notified bodies is required.

Process:

  1. Member States designate notifying authorities
  2. Notifying authorities approve notified bodies
  3. Notified bodies conduct conformity assessments
  4. Commission maintains public list of notified bodies

Article 28: Notifying Authorities

Each Member State must designate a notifying authority responsible for:

  • Receiving applications from conformity assessment bodies
  • Assessing bodies’ competence
  • Notifying approved bodies to Commission
  • Monitoring notified bodies

Articles 29-33: Notified Body Requirements

Requirements for Notified Bodies

Must demonstrate:

  • Independence and impartiality
  • Technical competence in AI systems
  • Resources (personnel, equipment)
  • Procedures for conformity assessment
  • Insurance coverage for liability
  • Absence of conflicts of interest

Application and Notification Process

  1. Conformity assessment body applies to notifying authority
  2. Notifying authority assesses against requirements (Art 31)
  3. If approved, authority notifies Commission and other Member States
  4. Commission publishes body in Official Journal and database

Articles 34-39: Notified Body Operations

Operational obligations:

  • Conduct conformity assessments according to procedures
  • Ensure confidentiality
  • Issue certificates only if requirements met
  • Withdraw certificates if non-compliance discovered
  • Report to notifying authority on activities
  • Cooperate with other notified bodies

Coordination:

  • Notified bodies must coordinate through sectoral groups
  • Share best practices, harmonize assessment approaches

Section 5: Standards, Conformity Assessment, CE Marking [Art 40-50]

Article 40: Harmonized Standards

40.1 — Presumption of Conformity

High-risk AI systems complying with harmonized standards published in Official Journal are presumed to conform to requirements those standards cover.

Benefit: Streamlined compliance demonstration.

40.2 — Standard Development

European standardization organizations (CEN, CENELEC, ETSI):

  • Develop harmonized standards at Commission request
  • Cover technical requirements (Art 8-15)
  • Published in Official Journal after Commission approval

Joint Technical Committee 21 (JTC 21): Leads AI standardization for AI Act compliance.

Timeline: Standards expected before August 2, 2026 for key areas.

40.3 — Voluntary Compliance

Use of harmonized standards is voluntary. Providers may demonstrate conformity through other means.

Article 41: Common Specifications

41.1 — When Common Specifications Apply

If harmonized standards are:

  • Not available
  • Insufficient
  • Delayed

Commission may adopt common specifications via implementing acts.

41.2 — Mandatory vs. Voluntary

If harmonized standard exists: Common specifications voluntary (alternative).

If no harmonized standard: Common specifications may be made mandatory for presumption of conformity.

Article 42: Presumption of Conformity

Systems complying with harmonized standards or common specifications are presumed to conform to requirements those standards/specifications cover.

Authorities may still:

  • Request additional evidence
  • Investigate complaints
  • Challenge conformity if evidence of non-compliance

Article 43: Conformity Assessment

43.1 — General Requirement

Providers must ensure high-risk AI systems undergo conformity assessment before placing on market or putting into service.

43.2 — Assessment Procedures

Internal control (Annex VI):

  • For high-risk systems listed in Annex III points 2-8
  • Provider self-assesses conformity
  • No notified body involvement

With notified body (Annex VII):

  • Required for biometric systems (Annex III point 1) when harmonized standards or common specifications are not applied, or are applied only in part
  • Optional (provider may choose Annex VI or Annex VII) when those standards or specifications are applied in full

Exception - law enforcement:

  • For biometric systems used by law enforcement, migration, or asylum authorities, or by Union institutions, bodies, offices, or agencies
  • The market surveillance authority acts as the notified body
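
The route selection described above can be written out as a small helper, shown here as an illustrative sketch only; the function and parameter names are assumptions, and Article 43(1) itself remains the authoritative rule.

```python
# Hypothetical conformity-assessment route selector for Annex III systems (illustrative).
def conformity_route(annex_iii_point: int, standards_fully_applied: bool) -> str:
    if annex_iii_point == 1:   # biometric systems
        if standards_fully_applied:
            return "Provider's choice: internal control (Annex VI) or notified body (Annex VII)"
        return "Notified body assessment required (Annex VII)"
    return "Internal control (Annex VI)"

print(conformity_route(1, standards_fully_applied=False))
print(conformity_route(5, standards_fully_applied=True))
```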

43.3 — Substantial Modification

Systems undergoing substantial modification must undergo new conformity assessment.

Definition: Modification after market placement affecting:

  • Compliance with requirements
  • Intended purpose

Article 44-46: Certificates and Derogations

Article 44: Certificates issued by notified bodies are valid for the period they indicate (up to 5 years for systems covered by Annex I, up to 4 years for systems covered by Annex III) and may be extended for further periods.

Article 45: Appeal procedures must exist for applicants challenging notified body decisions.

Article 46: Derogation from conformity assessment - for exceptional reasons (such as public security or the protection of life and health), market surveillance authorities may authorize placing specific high-risk systems on the market or putting them into service before the conformity assessment is completed.

Article 47: EU Declaration of Conformity

47.1 — Declaration Requirement

Provider must draw up written EU declaration of conformity for each high-risk AI system.

47.2 — Contents (Annex V)

Declaration must state:

  • Provider name and address
  • System name, type, version
  • Declaration that system complies with AI Act
  • References to harmonized standards or common specifications used
  • Notified body details (if applicable)
  • Signature, date, place

47.3 — Format and Language

  • Machine-readable format (electronic)
  • Physical or electronically signed
  • Translated into language(s) required by Member States
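
For illustration, the declaration contents listed above could be expressed in a machine-readable form such as the JSON sketch below; the Act requires a machine-readable declaration but does not prescribe this format or these field names, which are assumptions.

```python
# Hypothetical machine-readable EU declaration of conformity (illustrative sketch).
import json

declaration = {
    "provider": {"name": "Example AI GmbH", "address": "Musterstrasse 1, Berlin, DE"},
    "system": {"name": "triage-assist", "type": "clinical triage support", "version": "3.1"},
    "statement": "This AI system is in conformity with Regulation (EU) 2024/1689.",
    "standards_applied": ["references to harmonized standards or common specifications"],
    "notified_body": None,   # or a name and identification number if third-party assessed
    "signed": {"place": "Berlin", "date": "2026-07-01", "signatory": "Head of Compliance"},
}

print(json.dumps(declaration, indent=2))
```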

47.4 — Retention

Declaration must be kept available for 10 years after system placed on market.

Article 48: CE Marking

48.1 — Marking Requirement

High-risk AI systems meeting requirements must bear CE marking before placing on market.

48.2 — CE Marking Rules

  • Affixed visibly, legibly, indelibly on system or packaging/documentation
  • Followed by identification number of notified body (if third-party assessment)
  • Digital CE marking for software products (accessible in user interface or documentation)

48.3 — Meaning of CE Marking

CE marking indicates:

  • System complies with AI Act
  • Conformity assessment completed
  • Provider assumes full responsibility

48.4 — Prohibition

Cannot affix CE marking if system doesn’t comply. False CE marking = Article 99 penalties.

Article 49: Registration

Covered in detail in database-registration.md. See that document for full registration requirements.

Summary:

  • Providers must register in EU database before market placement
  • Deployers (public authorities) must register before putting into service
  • Registration creates public transparency

Article 50: Transparency Obligations for Certain AI Systems

50.1 — Emotion Recognition and Biometric Categorization

Deployers must inform natural persons exposed to:

  • Emotion recognition systems, OR
  • Biometric categorization systems

Timing: Before exposure occurs.

Format: Clear, understandable language.

Exception: Law enforcement when notification would prejudice investigation.

50.2 — AI-Generated Content (Deepfakes)

Providers of AI systems generating synthetic audio, image, video, or text content must:

  • Mark outputs in machine-readable format as artificially generated or manipulated
  • Ensure the marking is detectable and, as far as technically feasible, effective, interoperable, and robust

Deployers of AI systems that generate or manipulate deep fake content must additionally disclose that the content has been artificially generated or manipulated.

Applies to:

  • Deep fake images, audio, video
  • Synthetic text purporting to be authentic
  • Manipulated content

Exceptions:

  • AI-assisted editing within scope of creative freedom
  • Content detection/prevention systems
  • Administrative, legal proceedings (authorized use)
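
As an illustrative sketch only, a machine-readable disclosure could be attached to generated content as metadata; the schema below is an assumption, and in practice format-specific provenance or watermarking mechanisms would typically be used.

```python
# Hypothetical machine-readable "artificially generated" marker (illustrative sketch).
import json
from datetime import date

def mark_as_synthetic(content: bytes, generator: str) -> dict:
    return {
        "content_length": len(content),
        "disclosure": "This content has been artificially generated or manipulated.",
        "machine_readable": {"ai_generated": True,
                             "generator": generator,
                             "generated_on": date.today().isoformat()},
    }

print(json.dumps(mark_as_synthetic(b"<synthetic image bytes>", "image-gen-x"), indent=2))
```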

50.3 — Chatbots and Conversational AI

Providers of AI systems intended to interact directly with natural persons must ensure that:

  • The system is designed and developed so that individuals are informed they are interacting with an AI system
  • The information is provided no later than the first interaction

Unless:

  • Obvious from circumstances
  • Authorized law enforcement use

Compliance Workflow for Providers

Phase 1: Design and Development

  1. Classify system (Art 6) — Is it high-risk per Annex III?
  2. Establish quality management system (Art 17)
  3. Implement risk management system (Art 9)
    • Identify health/safety/fundamental rights risks
    • Assess reasonably foreseeable misuse
    • Implement mitigation measures
  4. Implement data governance (Art 10)
    • Ensure training data relevant, representative, free from bias
    • Document data management practices
  5. Design for human oversight (Art 14)
    • Enable understanding, override, intervention
  6. Ensure accuracy, robustness, cybersecurity (Art 15)
    • Define accuracy metrics
    • Implement error handling
    • Secure against attacks
  7. Implement automatic logging (Art 12)
    • Log all relevant events
    • Ensure tamper-proof storage
  8. Draft technical documentation (Art 11, Annex IV)
    • Complete all required sections
    • Keep under version control

Phase 2: Pre-Market Assessment

  1. Prepare instructions for use (Art 13)
    • Clear, comprehensive, in appropriate languages
    • Include all mandatory information
  2. Conduct conformity assessment (Art 43)
    • Internal control (Annex VI), OR
    • With notified body if required (Annex VII)
  3. Draw up EU declaration of conformity (Art 47, Annex V)
    • All mandatory information
    • Signed and dated
  4. Affix CE marking (Art 48)
    • On system, packaging, or documentation
    • Include notified body number if applicable

Phase 3: Market Entry

  1. Register in EU database (Art 49) BEFORE market placement
    • Complete Annex VIII Section A
    • Upload declaration and instructions
  2. Identify as provider (Art 16)
    • Display name, address on system
  3. Appoint authorized representative (Art 22) if established outside EU
  4. Place on market - System may now be sold/distributed

Phase 4: Post-Market

  1. Keep documentation (Art 18) for 10 years
  2. Implement post-market monitoring (Art 72) - continuous
  3. Keep logs (Art 19) accessible for authorities
  4. Report serious incidents (Art 20) immediately
  5. Take corrective actions (Art 20) when needed
  6. Update registration (Art 49) upon changes
  7. Cooperate with authorities (Art 21) upon request

Compliance Workflow for Deployers

Before Deployment

  1. Select compliant system - Verify CE marking, registration
  2. Review instructions for use (Art 13) - Understand requirements
  3. Assign human oversight (Art 26) - Competent, trained personnel
  4. Conduct FRIA (Art 27) if public authority
    • Assess fundamental rights impacts
    • Document mitigation measures
    • Consult stakeholders
  5. Register in EU database (Art 49(3)) if public authority
    • Complete Annex VIII Section C
    • Upload FRIA summary
  6. Notify workers (Art 26) if workplace deployment
    • Inform workers’ representatives
    • Inform affected workers
  7. Prepare monitoring procedures (Art 26)
    • Define how to monitor performance
    • Establish incident reporting process

During Deployment

  1. Use according to instructions (Art 26)
  2. Ensure input data quality (Art 26) if controlling inputs
  3. Monitor system operation (Art 26) continuously
  4. Keep logs (Art 26) for at least 6 months
  5. Implement human oversight (Art 26) throughout use
  6. Notify individuals (Art 26, Art 50) affected by decisions

Incident Response

  1. Detect incidents through monitoring
  2. Report serious incidents (Art 26) immediately to:
    • Provider
    • Market surveillance authority
  3. Suspend use if system presents risk
  4. Cooperate with investigations

Common Compliance Pitfalls

| Mistake | Consequence | Fix |
| --- | --- | --- |
| Insufficient risk assessment | Non-compliance (Art 9), potential harm | Conduct thorough risk identification including foreseeable misuse |
| Biased training data | Discriminatory outputs, non-compliance (Art 10) | Examine datasets for bias, ensure representativeness |
| No human oversight capability | Non-compliance (Art 14), inability to correct errors | Design override/intervention mechanisms from start |
| Incomplete technical documentation | Failed conformity assessment, authority rejection | Use Annex IV as checklist, document continuously |
| CE marking before registration | Incorrect sequence, potential penalties | Register in database FIRST, then affix CE marking |
| Deployer skips FRIA | Non-compliance (Art 27) for public authorities | Conduct FRIA before putting into service |
| Not reporting serious incidents | Article 99 penalties, ongoing harm | Establish incident detection and reporting procedures |
| Substantial modification without new assessment | System becomes non-compliant, must withdraw | Treat major changes as new system requiring full assessment |

Penalties for Non-Compliance

| Violation | Maximum Fine | SMEs and Start-ups |
| --- | --- | --- |
| Non-compliance with Section 2 requirements (Art 8-15) | Up to €15M or 3% of total worldwide annual turnover, whichever is higher | Same caps, whichever is lower |
| Breach of provider obligations (Art 16-22) | Up to €15M or 3% of total worldwide annual turnover, whichever is higher | Same caps, whichever is lower |
| Breach of deployer obligations (Art 26-27) | Up to €15M or 3% of total worldwide annual turnover, whichever is higher | Same caps, whichever is lower |
| Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities (including in reply to a request) | Up to €7.5M or 1% of total worldwide annual turnover, whichever is higher | Same caps, whichever is lower |

Legal basis: The fines above are laid down in Article 99 (penalties); Member States adopt the detailed rules on penalties and enforcement.
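
As an arithmetic illustration of how the caps combine (the turnover figures below are invented):

```python
# Illustrative fine-cap calculation: the higher of the two caps applies (the lower for SMEs).
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float, sme: bool = False) -> float:
    pct_cap = turnover_eur * pct
    return min(fixed_cap_eur, pct_cap) if sme else max(fixed_cap_eur, pct_cap)

# Large company, EUR 2bn turnover, Section 2 infringement: 3% exceeds EUR 15M.
print(fine_cap(2_000_000_000, 15_000_000, 0.03))           # 60000000.0
# SME, EUR 20M turnover: the lower of EUR 15M and 3% (EUR 600k) applies.
print(fine_cap(20_000_000, 15_000_000, 0.03, sme=True))    # 600000.0
```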

Timeline Summary

| Date | Milestone |
| --- | --- |
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited practices enforceable (Art 5) |
| August 2, 2026 | High-risk system requirements fully applicable (Art 8-50) |
| Before market placement | Conformity assessment, CE marking, registration must be complete |
| 10 years after market placement | Documentation retention requirement expires |

Interaction with Other Regulations

| Regulation | Interaction with AI Act |
| --- | --- |
| GDPR | Data governance (Art 10) must comply with GDPR; DPIAs may be integrated with FRIAs |
| Cybersecurity Act | Cybersecurity requirements (Art 15) align with EU cybersecurity certification |
| Product Safety Regulation | CE marking and conformity assessment follow New Legislative Framework model |
| Accessibility Directive | Art 16 requires compliance with EU 2016/2102 and EU 2019/882 |
| DSA | Transparency obligations (Art 50) complement DSA requirements |

Resources for Compliance

Official Sources

Standards and Specifications

  • JTC 21 standards: CEN-CENELEC website
  • Common specifications: To be published in Official Journal

Conformity Assessment

  • Notified bodies list: To be published in Commission database
  • Conformity assessment procedures: Annexes VI and VII of AI Act

Citation

Chapter III — High-Risk AI Systems, Regulation (EU) 2024/1689


Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
