EU AI Act: Provider and Deployer Obligations for High-Risk AI Systems
Provider and Deployer Obligations for High-Risk AI Systems [Art 8-50]
Rule: Providers and deployers of high-risk AI systems must meet strict requirements covering risk management, data governance, documentation, conformity assessment, CE marking, registration, and ongoing monitoring. Compliance mandatory before market placement.
Effective: August 2, 2026 (high-risk system requirements)
Overview
Chapter III of the AI Act establishes a comprehensive regulatory framework for high-risk AI systems (listed in Annex III). The framework follows the “New Legislative Framework” model used for product safety regulation across the EU.
Key principle: Only compliant high-risk AI systems may be placed on EU market or put into service.
Chapter III Structure
| Section | Articles | Coverage |
|---|---|---|
| Section 1 | 6-7 | Classification of high-risk AI systems |
| Section 2 | 8-15 | Technical requirements for high-risk systems |
| Section 3 | 16-27 | Obligations of providers, deployers, and supply chain actors |
| Section 4 | 28-39 | Notifying authorities and notified bodies |
| Section 5 | 40-49 | Standards, conformity assessment, CE marking, registration |
Section 2: Requirements for High-Risk AI Systems [Art 8-15]
All high-risk AI systems must meet these technical requirements before market placement.
Article 8: Compliance with Requirements
Obligation: High-risk AI systems must comply with ALL requirements in Section 2.
Design principle: Requirements apply taking into account:
- Intended purpose
- State of the art
- Reasonably foreseeable risks
Continuous compliance: Requirements apply throughout system lifecycle, including after deployment.
Article 9: Risk Management System
9.1 — Core Requirement
Providers must establish, implement, document, and maintain a continuous, iterative risk management system throughout the AI system lifecycle.
9.2 — Risk Identification and Analysis
Risk management system must identify and analyze known and reasonably foreseeable risks related to:
| Risk Category | Examples |
|---|---|
| Health risks | Misdiagnosis in medical AI, incorrect drug recommendations |
| Safety risks | Autonomous vehicle accidents, industrial robot malfunctions |
| Fundamental rights | Discrimination, privacy violations, freedom of expression |
Scope includes:
- Intended use: Risks when used as provider designed
- Reasonably foreseeable misuse: Risks from predictable incorrect use
9.3 — Risk Assessment
For each identified risk:
- Estimate magnitude and likelihood
- Evaluate against acceptable risk thresholds
- Prioritize for mitigation
9.4 — Risk Mitigation Measures
Hierarchy of controls:
| Priority | Approach | Example |
|---|---|---|
| 1. Elimination | Design out the risk | Remove feature causing discrimination |
| 2. Minimization | Reduce risk through technical measures | Improve data quality, add constraints |
| 3. Information | Warn users about residual risks | Instructions for use, training requirements |
Balance: Mitigation measures should not degrade system performance beyond what’s necessary for safety.
9.5 — Post-Market Monitoring Integration
Risk management must incorporate learnings from:
- Incident reports
- User feedback
- Post-market surveillance data
Continuous improvement: Update risk assessment when new information emerges.
9.6 — Testing Requirements
Risk management system must include:
- Testing of identified risks
- Testing with real or representative data
- Validation that mitigation measures work
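As an illustration of how this loop can be supported in code, the sketch below keeps a simple risk register, scores each risk by severity times likelihood, and lists the risks still above an acceptability threshold. All names, scales, and the threshold are assumptions a provider would have to define and justify itself; the Act prescribes none of them.

```python
from dataclasses import dataclass
from enum import IntEnum


class Control(IntEnum):
    """Hierarchy of controls from Article 9 (lower value = preferred)."""
    ELIMINATION = 1
    MINIMIZATION = 2
    INFORMATION = 3


@dataclass
class Risk:
    description: str
    category: str               # "health", "safety", or "fundamental rights"
    severity: int               # assumed 1-5 scale
    likelihood: int             # assumed 1-5 scale
    foreseeable_misuse: bool = False
    chosen_control: Control | None = None

    @property
    def score(self) -> int:
        return self.severity * self.likelihood


ACCEPTABLE_SCORE = 6  # assumed organisational threshold, not set by the Act


def open_risks(register: list[Risk]) -> list[Risk]:
    """Risks above the acceptability threshold, worst first, awaiting mitigation."""
    return sorted(
        (r for r in register if r.score > ACCEPTABLE_SCORE),
        key=lambda r: r.score,
        reverse=True,
    )


if __name__ == "__main__":
    register = [
        Risk("Discriminatory ranking of applicants", "fundamental rights", 4, 3),
        Risk("Operator relies on output without review", "safety", 3, 4,
             foreseeable_misuse=True),
        Risk("Minor UI latency", "safety", 1, 2, chosen_control=Control.INFORMATION),
    ]
    for risk in open_risks(register):
        print(f"{risk.score:>2}  {risk.category:<20} {risk.description}")
```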
Article 10: Data and Data Governance
10.1 — Training Data Requirements
Quality criteria:
- Relevant: Data corresponds to system’s intended purpose
- Representative: Covers full range of use cases and user populations
- Free of errors: To the best extent possible, supported by data cleaning and validation processes
- Complete: No critical data gaps
Bias mitigation: Training, validation, and testing datasets must be:
- Examined for possible biases
- Appropriate statistical properties considered
- Representative of persons/groups on whom system will be used
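As one concrete form the bias examination can take, the sketch below compares positive-outcome rates across groups in a labelled dataset and flags large disparities. The group column, label column, and tolerance ratio are illustrative assumptions; the Act does not prescribe a specific fairness metric.

```python
from collections import defaultdict


def positive_rates(records: list[dict], group_key: str, label_key: str) -> dict[str, float]:
    """Share of positive labels per group in a training/validation/testing set."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += int(bool(row[label_key]))
    return {g: positives[g] / totals[g] for g in totals}


def disparity_flags(rates: dict[str, float], max_ratio: float = 1.25) -> list[str]:
    """Flag group pairs whose positive-rate ratio exceeds an assumed tolerance."""
    flags = []
    for g1, r1 in rates.items():
        for g2, r2 in rates.items():
            if g1 < g2 and min(r1, r2) > 0 and max(r1, r2) / min(r1, r2) > max_ratio:
                flags.append(f"{g1} vs {g2}: {r1:.2f} vs {r2:.2f}")
    return flags


if __name__ == "__main__":
    data = [
        {"group": "A", "hired": 1}, {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
        {"group": "B", "hired": 1}, {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
    ]
    rates = positive_rates(data, "group", "hired")
    print(rates, disparity_flags(rates))
```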
10.2 — Data Governance Practices
Providers must implement data management practices including:
| Practice | Description |
|---|---|
| Design choices | Decisions on data collection, preprocessing, formulation |
| Data collection | How data is obtained, from what sources |
| Data preparation | Cleaning, labeling, enrichment operations |
| Formulation assumptions | Mathematical representations, feature engineering |
| Assessment methods | How data quality is validated |
| Examination for biases | Systematic checks for unfair patterns |
10.3 — Special Categories of Data
Prohibited unless exception applies: Processing of special categories of personal data (race, ethnicity, political opinions, religion, health, sexual orientation) to correct biases is permitted ONLY when:
- Technically strictly necessary
- Appropriate safeguards in place
- Compliant with GDPR Article 9
10.4 — Data Retention
Training, validation, and testing datasets must be retained or documented throughout system lifecycle for:
- Conformity assessment
- Market surveillance
- Post-market monitoring
Article 11: Technical Documentation
11.1 — Documentation Requirement
Providers must draw up technical documentation BEFORE placing system on market.
Purpose: Demonstrate system complies with AI Act requirements.
11.2 — Contents (Annex IV)
Technical documentation must include:
| Section | Contents |
|---|---|
| 1. General description | Intended purpose, developer, versions, lifecycle |
| 2. System description | Architecture, algorithms, outputs, data |
| 3. Detailed specs | Computational resources, measures for accuracy/robustness/cybersecurity |
| 4. Data | Training/validation/testing datasets, data governance, bias examination |
| 5. Risk management | Risk management system documentation, identified risks, mitigation measures |
| 6. Human oversight | Measures enabling human oversight, role descriptions |
| 7. Accuracy/robustness | Metrics, test results, performance under stress |
| 8. Cybersecurity | Security measures, vulnerability assessments |
| 9. Quality management | Quality management system documentation |
| 10. Changes and updates | Log of all substantial modifications |
| 11. Assessment | Results of conformity assessment, certificates |
Update requirement: Documentation must be kept current throughout system lifecycle.
11.3 — Retention Period
Technical documentation must be kept for 10 years after system placed on market or put into service.
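A simple completeness check against the Annex IV headings helps keep the documentation current as the system evolves. The sketch below assumes one markdown file per Annex IV section under a docs directory; both the layout and the file names are assumptions, not a prescribed structure.

```python
from pathlib import Path

# Section headings mirroring the Annex IV contents table above.
ANNEX_IV_SECTIONS = [
    "general_description", "system_description", "detailed_specs", "data",
    "risk_management", "human_oversight", "accuracy_robustness",
    "cybersecurity", "quality_management", "changes_and_updates", "assessment",
]


def missing_sections(doc_dir: Path) -> list[str]:
    """Report Annex IV sections with no corresponding markdown file."""
    return [s for s in ANNEX_IV_SECTIONS if not (doc_dir / f"{s}.md").exists()]


if __name__ == "__main__":
    gaps = missing_sections(Path("docs/technical_documentation"))
    if gaps:
        raise SystemExit(f"Technical documentation incomplete: {', '.join(gaps)}")
    print("All Annex IV sections present.")
```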
Article 12: Record-Keeping (Automatic Logging)
12.1 — Logging Requirement
High-risk AI systems must have capabilities to automatically generate logs throughout system lifetime.
Minimum retention: Providers (Art 19) and deployers (Art 26) must keep logs for at least 6 months, or longer where required by applicable Union or national law.
12.2 — Logged Events
Logs must enable:
- Traceability of system functioning
- Monitoring throughout lifecycle
- Post-market surveillance
12.3 — What to Log
| Event Type | Examples |
|---|---|
| System activation | When system starts/stops operating |
| Input data | What data system received |
| Outputs | Decisions, recommendations, predictions made |
| User interactions | Human oversight actions |
| Anomalies | Errors, malfunctions, unexpected behavior |
Purpose: Enable investigation of incidents, discrimination claims, performance issues.
12.4 — Log Access and Protection
Cybersecurity: Logs must be protected against tampering and unauthorized access.
GDPR compliance: Personal data in logs must comply with data protection requirements.
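One way to satisfy the traceability and tamper-protection points above is an append-only, hash-chained log. The sketch below uses only the standard library; the record fields and the chaining scheme are illustrative assumptions rather than anything mandated by the Act.

```python
import hashlib
import json
from datetime import datetime, timezone


def append_log_entry(log: list[dict], event_type: str, payload: dict) -> dict:
    """Append a hash-chained record so later tampering is detectable."""
    previous_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # activation, input, output, override, anomaly
        "payload": payload,
        "previous_hash": previous_hash,
    }
    serialized = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(serialized).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash and check the chain is unbroken."""
    previous_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["previous_hash"] != previous_hash:
            return False
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        previous_hash = entry["entry_hash"]
    return True


if __name__ == "__main__":
    log: list[dict] = []
    append_log_entry(log, "activation", {"model_version": "1.4.2"})
    append_log_entry(log, "output", {"decision": "reject", "confidence": 0.81})
    print("chain intact:", verify_chain(log))
```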
Article 13: Transparency and Information to Deployers
13.1 — Instructions for Use
Providers must provide clear, comprehensive, and easily accessible instructions for use in appropriate languages.
13.2 — Mandatory Information
Instructions must contain:
| Information | Details |
|---|---|
| Identity and contact | Provider name, address, authorized representative |
| Intended purpose | Specific use for which system is designed |
| Level of accuracy | Metrics, expected performance, limitations |
| Robustness measures | How system handles errors, stress, attacks |
| Known limitations | Scenarios where system may fail |
| Foreseeable misuse | How system might be misused and risks |
| Human oversight | How to implement effective oversight |
| Computational resources | Hardware/software requirements |
| Lifespan | Expected operational lifetime, maintenance needs |
13.3 — Characteristics of System
Instructions must specify:
- Input data requirements (format, quality, relevance)
- Output interpretation (what results mean, confidence levels)
- Changes from previous versions (if applicable)
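Some providers also ship the mandatory information in a machine-readable companion file so deployers can check it programmatically. The JSON layout below is purely illustrative (the Act requires the content, not this format), and every value is a made-up example.

```python
import json

# Illustrative instructions-for-use record covering the Article 13 items above.
instructions_for_use = {
    "provider": {"name": "Example AI GmbH", "address": "Example Str. 1, Berlin",
                 "authorised_representative": None},
    "intended_purpose": "Triage of customer-support tickets",
    "accuracy": {"metric": "macro F1", "declared": 0.91, "test_set": "internal v3"},
    "robustness": "Falls back to human queue on malformed input",
    "known_limitations": ["Not validated for non-English tickets"],
    "foreseeable_misuse": ["Use for employee performance scoring"],
    "human_oversight": "Reviewer dashboard with per-ticket override",
    "computational_resources": {"min_ram_gb": 8, "gpu_required": False},
    "expected_lifetime": "24 months, quarterly model refresh",
    "input_requirements": {"format": "UTF-8 text", "max_length_chars": 10000},
    "output_interpretation": "Scores above 0.7 indicate high urgency",
}

print(json.dumps(instructions_for_use, indent=2, ensure_ascii=False))
```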
Article 14: Human Oversight
14.1 — Core Requirement
High-risk AI systems must be designed to enable effective human oversight during use.
Purpose: Prevent or minimize risks to health, safety, or fundamental rights.
14.2 — Human Oversight Measures
Systems must enable oversight persons to:
| Capability | Description |
|---|---|
| Understand | Fully comprehend system capabilities and limitations |
| Awareness | Remain aware of automation bias tendency |
| Interpret | Correctly interpret system output |
| Override | Decide not to use system or disregard/reverse output |
| Intervene | Interrupt system operation immediately |
14.3 — Interface Requirements
Human-machine interface must:
- Present information in clear, understandable format
- Enable real-time monitoring
- Provide override/emergency stop mechanisms
- Alert human when intervention needed
14.4 — Competence Requirements
Oversight persons must have:
- Appropriate competence
- Training specific to the system
- Authority to act on decisions
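A common design for the override and intervene capabilities is to route every output through an oversight gate before it takes effect. The sketch below is an assumed pattern, not a prescribed mechanism; the enum values and the suspension behaviour are illustrative.

```python
from dataclasses import dataclass
from enum import Enum


class OversightDecision(Enum):
    ACCEPT = "accept"
    OVERRIDE = "override"      # human substitutes their own outcome
    SUSPEND = "suspend"        # human interrupts system operation


@dataclass
class SystemOutput:
    subject_id: str
    recommendation: str
    confidence: float


def oversight_gate(output: SystemOutput, reviewer_decision: OversightDecision,
                   reviewer_outcome: str | None = None) -> str:
    """Apply the human reviewer's decision before the output takes effect."""
    if reviewer_decision is OversightDecision.SUSPEND:
        raise RuntimeError("System suspended by oversight person")
    if reviewer_decision is OversightDecision.OVERRIDE:
        if reviewer_outcome is None:
            raise ValueError("Override requires a replacement outcome")
        return reviewer_outcome
    return output.recommendation


if __name__ == "__main__":
    out = SystemOutput("case-42", "reject", 0.55)
    # Low confidence: assumed policy is to require explicit human review.
    final = oversight_gate(out, OversightDecision.OVERRIDE, reviewer_outcome="manual review")
    print(final)
```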
Article 15: Accuracy, Robustness, and Cybersecurity
15.1 — Accuracy Requirements
Obligation: Systems must achieve appropriate level of accuracy throughout lifecycle.
Metrics: Accuracy must be:
- Declared by provider in technical documentation
- Measured using appropriate metrics for system type
- Maintained within declared ranges during operation
15.2 — Robustness Requirements
Systems must be resilient against:
| Threat | Mitigation |
|---|---|
| Errors | Error handling, graceful degradation |
| Faults | Redundancy, failsafes |
| Inconsistencies | Input validation, anomaly detection |
| Adversarial attacks | Defensive mechanisms, input sanitization |
15.3 — Cybersecurity Requirements
Security by design: Systems must be secured against unauthorized access, modification, or data theft.
Technical measures:
- Authentication and authorization
- Encryption of data in transit and at rest
- Secure logging
- Vulnerability management
- Incident response procedures
State of the art: Security measures must reflect current best practices.
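Declared accuracy is only meaningful if it is also tracked in operation. The monitor below compares a rolling accuracy estimate against the minimum declared in the technical documentation; the window size and the simple rolling-window approach are assumptions.

```python
from collections import deque


class AccuracyMonitor:
    """Track rolling accuracy against the range declared in the technical documentation."""

    def __init__(self, declared_min: float, window: int = 500):
        self.declared_min = declared_min
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(prediction_correct)

    @property
    def rolling_accuracy(self) -> float | None:
        if len(self.outcomes) < self.outcomes.maxlen:
            return None  # not enough data for a stable estimate yet
        return sum(self.outcomes) / len(self.outcomes)

    def out_of_declared_range(self) -> bool:
        acc = self.rolling_accuracy
        return acc is not None and acc < self.declared_min


if __name__ == "__main__":
    monitor = AccuracyMonitor(declared_min=0.90, window=100)
    for i in range(100):
        monitor.record(prediction_correct=(i % 10 != 0))  # 90% correct
    print(monitor.rolling_accuracy, monitor.out_of_declared_range())
```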
Section 3: Obligations of Providers and Deployers [Art 16-27]
Article 16: Obligations of Providers
Providers of high-risk AI systems must:
16.1 — Compliance and Quality Management
- Ensure system complies with ALL Section 2 requirements (Art 8-15)
- Establish quality management system per Article 17
- Keep technical documentation per Article 18
- Preserve automatically generated logs under their control (Art 19)
16.2 — Market Access
- Complete conformity assessment before placing on market (Art 43)
- Draw up EU declaration of conformity (Art 47)
- Affix CE marking on system (Art 48)
- Register system in EU database (Art 49)
16.3 — Identification
- Display provider name, registered trade name/trademark, contact address on system or packaging/documentation
16.4 — Post-Market Obligations
- Take corrective actions when necessary (Art 20)
- Inform authorities of non-compliance or risks (Art 20)
- Cooperate with competent authorities (Art 21)
- Ensure compliance with accessibility directives (EU 2016/2102, EU 2019/882)
16.5 — Demonstration of Conformity
- Upon request, demonstrate conformity to national authorities in language they understand
Article 17: Quality Management System
17.1 — Core Requirement
Providers must establish, document, implement, and maintain a quality management system ensuring compliance with AI Act.
17.2 — System Components
Quality management system must systematically address:
| Component | Coverage |
|---|---|
| Strategy | Regulatory compliance plan, resource allocation |
| Design & development | Requirements specification, design controls, verification |
| Technical documentation | Documentation procedures, version control |
| Quality control | Testing, validation, acceptance criteria |
| Post-market monitoring | Surveillance plan, incident handling |
| Communication | Procedures for information to authorities and users |
| Accountability | Assignment of responsibilities, management oversight |
17.3 — Documented Procedures
Must include written procedures for:
- Change management (handling substantial modifications)
- Corrective and preventive actions
- Risk management integration
- Conformity assessment coordination
Article 18: Documentation Keeping
18.1 — Retention Obligation
Providers must keep ALL required documentation available for national authorities for 10 years after:
- System placed on market, OR
- System put into service
18.2 — Documents Covered
- Technical documentation (Annex IV)
- EU declaration of conformity (Art 47)
- Quality management system documentation (Art 17)
- Certificates from notified bodies (if applicable)
- Updates and modifications log
18.3 — Format and Accessibility
Documentation must be:
- In format easily accessible to authorities
- In language(s) understandable by authorities
- Maintained even if provider ceases operations (through successor arrangements)
Article 19: Automatically Generated Logs
19.1 — Provider Obligation
Providers must keep logs automatically generated by high-risk AI system under their control (e.g., cloud-based systems).
Minimum retention: In accordance with Article 12, at least 6 months.
19.2 — Access for Authorities
Providers must make logs available to market surveillance authorities upon request.
Article 20: Corrective Actions and Duty of Information
20.1 — Non-Compliance Discovery
When provider has reason to believe high-risk system does not conform to AI Act:
Immediate actions:
- Take necessary corrective actions to bring into conformity
- Withdraw or recall system if appropriate
- Inform deployers and distributors
20.2 — Serious Incident or Malfunctioning
When provider has reason to believe system presents a risk:
Immediate notification to:
- Market surveillance authorities in Member States where system available
- Notified body that issued certificate (if applicable)
Information to provide:
- Description of non-conformity and corrective actions taken
- Identification of systems affected
- Geographic scope (where systems available)
20.3 — Serious Incidents
Definition (Art 3(49)): Incident or malfunctioning that directly or indirectly leads to:
- Death or serious harm to health
- Serious and irreversible disruption of critical infrastructure
- Infringement of Union law intended to protect fundamental rights
- Serious harm to property or the environment
Notification timeline: Report immediately after establishing a causal link (or its reasonable likelihood) between the AI system and the incident; detailed reporting deadlines are set in Article 73.
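Internally, the items to be reported can be captured as one structured record so nothing is missed under time pressure. The fields below are an assumed internal schema, not an official reporting format, and all values are placeholders.

```python
import json
from datetime import date

incident_notification = {
    "reported_on": date.today().isoformat(),
    "system": {"name": "ExampleTriage", "version": "1.4.2", "eu_database_id": "TBD"},
    "nature": "serious incident",                 # or "non-conformity"
    "description": "Incorrect high-urgency classification led to delayed escalation",
    "corrective_actions": ["Model rollback to 1.4.1", "Deployer notification sent"],
    "affected_systems": ["all 1.4.2 deployments"],
    "geographic_scope": ["DE", "FR"],
    "recipients": ["market surveillance authority (DE)", "notified body (if certified)"],
}

print(json.dumps(incident_notification, indent=2))
```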
Article 21: Cooperation with Competent Authorities
21.1 — Cooperation Duty
Upon reasoned request, providers must provide authorities with:
- Information and documentation to demonstrate conformity
- Access to logs
- Cooperation in corrective actions
Language: In language easily understood by authorities.
21.2 — Response Timeline
Providers must respond to authority requests without undue delay.
Article 22: Authorized Representatives
22.1 — When Required
Providers established outside EU must appoint authorized representative established in EU before placing system on market.
22.2 — Representative Responsibilities
Authorized representative:
- Acts on behalf of provider vis-à-vis authorities
- Holds documentation
- Cooperates with market surveillance
- Can be addressed by authorities instead of provider
22.3 — Mandate Requirements
Written mandate must specify tasks and empower representative to:
- Verify EU declaration of conformity and technical documentation are drawn up
- Keep documentation available for 10 years
- Provide authorities with information upon request
- Cooperate in corrective actions
Article 23: Obligations of Importers
23.1 — Who is Importer
Entity established in the EU that places on the EU market a high-risk AI system bearing the name or trademark of an entity established outside the EU.
23.2 — Key Obligations
Importers must:
- Verify provider completed conformity assessment
- Verify technical documentation available
- Verify system bears CE marking and accompanied by documentation
- Verify provider and authorized representative identified
- Indicate own name, address on system or packaging
- Ensure storage/transport conditions don’t jeopardize compliance
- Keep register of non-conforming or recalled systems
- Inform provider and authorities if system presents risk
23.3 — Market Surveillance Role
Importers act as bridge between non-EU providers and EU authorities.
Article 24: Obligations of Distributors
24.1 — Who is Distributor
Entity in supply chain (other than provider or importer) that makes high-risk AI system available on EU market.
24.2 — Key Obligations
Distributors must:
- Verify system bears CE marking
- Verify accompanied by documentation and instructions
- Verify provider and importer identified
- Ensure storage/transport conditions don’t jeopardize compliance
- Inform provider/importer if system doesn’t appear to conform
- Not place on market if doesn’t comply
- Inform provider/importer and authorities if system presents risk
Article 25: Responsibilities Along the AI Value Chain
25.1 — Role Transformation
An entity may change roles:
| Scenario | New Role | Reason |
|---|---|---|
| Importer modifies system | Provider | Substantial modification triggers provider obligations |
| Distributor modifies system | Provider | Substantial modification triggers provider obligations |
| Deployer modifies system substantially | Provider | Making significant changes to purpose/functionality |
| Provider uses own system | Also Deployer | Using system under own authority |
25.2 — Substantial Modification
Definition: Change after market placement that affects compliance with requirements or intended purpose.
Triggers provider obligations:
- New conformity assessment
- New CE marking
- New registration
Examples:
- Changing algorithm significantly
- Retraining on fundamentally different dataset
- Changing intended purpose
- Modifying outputs or decision logic
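Change-management tooling can flag early when a planned change is likely to count as a substantial modification. The check below encodes an assumed internal policy; deciding which change types actually affect compliance or intended purpose remains a documented judgement of the provider.

```python
from dataclasses import dataclass


@dataclass
class Change:
    kind: str         # e.g. "algorithm", "training_data", "intended_purpose", "ui_copy"
    description: str


# Assumed internal policy: change kinds presumed to affect compliance or purpose.
SUBSTANTIAL_KINDS = {"algorithm", "training_data", "intended_purpose", "decision_logic"}


def requires_new_assessment(changes: list[Change]) -> list[Change]:
    """Return the changes that likely trigger a new conformity assessment."""
    return [c for c in changes if c.kind in SUBSTANTIAL_KINDS]


if __name__ == "__main__":
    planned = [
        Change("ui_copy", "Reword the results page"),
        Change("training_data", "Retrain on a different population"),
    ]
    for change in requires_new_assessment(planned):
        print("Re-assessment needed:", change.description)
```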
Article 26: Obligations of Deployers
Deployers of high-risk AI systems must:
26.1 — Compliance with Instructions
- Use system in accordance with instructions for use
- Implement technical and organizational measures specified
- Ensure system used only for intended purpose
26.2 — Human Oversight
- Assign natural persons to human oversight roles
- Ensure oversight persons have:
- Necessary competence
- Appropriate training
- Sufficient authority
- Necessary support
26.3 — Input Data Management
When deployer controls input data:
- Ensure data is relevant for intended purpose
- Ensure data is sufficiently representative
26.4 — Monitoring and Logging
- Monitor operation based on instructions for use
- Retain automatically generated logs for:
- At least 6 months, OR
- Longer period as applicable law requires
26.5 — Risk and Incident Reporting
Serious incidents:
- Immediately inform provider, importer, or distributor
- Immediately inform relevant market surveillance authorities
Suspected risks:
- Inform provider of suspected risks
- Cease use if presenting risk
26.6 — Workplace Notification (Employers)
Before deployment:
- Inform workers’ representatives of high-risk AI system use
- Inform affected workers they will be subject to system
26.7 — Public Authority Obligations
Additional requirements for public authorities/Union institutions:
- Conduct fundamental rights impact assessment before use (Art 27)
- Register use in EU database (Art 49(3))
- Comply with registration requirements
- Ensure system registered before putting into service
26.8 — Notification to Individuals
For high-risk systems making/assisting decisions on natural persons:
- Inform individuals they are subject to use of high-risk AI system
Exceptions:
- Law enforcement purposes (when notification would harm investigation)
26.9 — Law Enforcement - Remote Biometric Identification
Additional safeguards for post-remote biometric identification:
- Obtain authorization from judicial or administrative authority
- Complete fundamental rights impact assessment
- Annual reporting on use to competent authorities
Authorization timing:
- Prior to use (preferred), OR
- Within 48 hours of use (urgent cases)
Judicial review: Authorization decisions subject to judicial remedy.
Article 27: Fundamental Rights Impact Assessment (FRIA)
27.1 — Who Must Conduct
Mandatory for:
- Public authorities and bodies governed by public law
- Union institutions/bodies
- Private entities providing public services
- Deployers of credit-scoring and life/health insurance risk-assessment systems (Annex III, points 5(b) and (c))
Before putting high-risk AI system into service.
27.2 — Assessment Contents
FRIA must include:
| Component | Description |
|---|---|
| Description | Deployer’s processes where AI will be used |
| Deployment period | How long system will operate |
| Categories of persons | Who will be affected and how |
| Specific risks | Risks to fundamental rights identified |
| Purpose | Why system is being deployed |
| Benefits | Expected benefits justifying use |
| Mitigation measures | How risks will be addressed |
| Complementary measures | Procedural safeguards, human oversight |
27.3 — Consultation Requirements
Deployers should consult:
- Affected persons or representatives
- Independent experts
- Data protection officer (if processing personal data)
- Workers’ representatives (workplace deployment)
27.4 — Documentation and Registration
- Document FRIA systematically
- Notify the market surveillance authority of the results (Art 27)
- Make available to market surveillance authorities
- Integrate with GDPR DPIA if processing personal data
27.5 — Update Requirement
FRIA must be updated when:
- System modified substantially
- Use changes significantly
- New risks emerge from monitoring
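The components in the table above lend themselves to a single structured record per deployment, revisited whenever one of the 27.5 triggers occurs. The field names below are an assumed internal layout, not an official template.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class FundamentalRightsImpactAssessment:
    deployer_process: str
    deployment_period: str
    affected_categories: list[str]
    identified_risks: list[str]
    purpose: str
    expected_benefits: list[str]
    mitigation_measures: list[str]
    complementary_measures: list[str]
    consulted_parties: list[str] = field(default_factory=list)
    last_updated: date = field(default_factory=date.today)

    def needs_update(self, substantial_modification: bool, use_changed: bool,
                     new_risks: bool) -> bool:
        """Mirror the update triggers listed in 27.5."""
        return substantial_modification or use_changed or new_risks


fria = FundamentalRightsImpactAssessment(
    deployer_process="Eligibility screening for a public benefit",
    deployment_period="12 months, reviewed quarterly",
    affected_categories=["benefit applicants"],
    identified_risks=["indirect discrimination against part-time workers"],
    purpose="Reduce processing backlog",
    expected_benefits=["faster decisions for straightforward cases"],
    mitigation_measures=["human review of every negative decision"],
    complementary_measures=["appeal channel", "quarterly outcome audit"],
    consulted_parties=["data protection officer", "staff representatives"],
)
print(fria.needs_update(substantial_modification=False, use_changed=True, new_risks=False))
```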
Section 4: Notifying Authorities and Notified Bodies [Art 28-39]
Overview: Conformity Assessment Ecosystem
For certain high-risk AI systems (notably biometric systems under Annex III point 1 where harmonized standards are not applied in full, and product-embedded systems regulated under Annex I), third-party conformity assessment by notified bodies is required.
Process:
- Member States designate notifying authorities
- Notifying authorities approve notified bodies
- Notified bodies conduct conformity assessments
- Commission maintains public list of notified bodies
Article 28: Notifying Authorities
Each Member State must designate a notifying authority responsible for:
- Receiving applications from conformity assessment bodies
- Assessing bodies’ competence
- Notifying approved bodies to Commission
- Monitoring notified bodies
Articles 29-33: Notified Body Requirements
Requirements for Notified Bodies
Must demonstrate:
- Independence and impartiality
- Technical competence in AI systems
- Resources (personnel, equipment)
- Procedures for conformity assessment
- Insurance coverage for liability
- Absence of conflicts of interest
Application and Notification Process
- Conformity assessment body applies to notifying authority
- Notifying authority assesses against requirements (Art 31)
- If approved, authority notifies Commission and other Member States
- Commission assigns an identification number and lists the body in its public database of notified bodies
Articles 34-39: Notified Body Operations
Operational obligations:
- Conduct conformity assessments according to procedures
- Ensure confidentiality
- Issue certificates only if requirements met
- Withdraw certificates if non-compliance discovered
- Report to notifying authority on activities
- Cooperate with other notified bodies
Coordination:
- Notified bodies must coordinate through sectoral groups
- Share best practices, harmonize assessment approaches
Section 5: Standards, Conformity Assessment, CE Marking [Art 40-49]
Article 40: Harmonized Standards
40.1 — Presumption of Conformity
High-risk AI systems complying with harmonized standards published in Official Journal are presumed to conform to requirements those standards cover.
Benefit: Streamlined compliance demonstration.
40.2 — Standard Development
European standardization organizations (CEN, CENELEC, ETSI):
- Develop harmonized standards at Commission request
- Cover technical requirements (Art 8-15)
- Published in Official Journal after Commission approval
Joint Technical Committee 21 (JTC 21): Leads AI standardization for AI Act compliance.
Timeline: Standards expected before August 2, 2026 for key areas.
40.3 — Voluntary Compliance
Use of harmonized standards is voluntary. Providers may demonstrate conformity through other means.
Article 41: Common Specifications
41.1 — When Common Specifications Apply
If harmonized standards are:
- Not available
- Insufficient
- Delayed
Commission may adopt common specifications via implementing acts.
41.2 — Mandatory vs. Voluntary
If harmonized standard exists: Common specifications voluntary (alternative).
If no harmonized standard: Common specifications may be made mandatory for presumption of conformity.
Article 42: Presumption of Conformity
Systems complying with harmonized standards or common specifications are presumed to conform to requirements those standards/specifications cover.
Authorities may still:
- Request additional evidence
- Investigate complaints
- Challenge conformity if evidence of non-compliance
Article 43: Conformity Assessment
43.1 — General Requirement
Providers must ensure high-risk AI systems undergo conformity assessment before placing on market or putting into service.
43.2 — Assessment Procedures
Internal control (Annex VI):
- For most high-risk systems in Annex III points 2-8
- Provider self-assesses conformity
- No notified body involvement
With notified body (Annex VII):
- For biometric systems (Annex III point 1) where harmonized standards or common specifications are not applied, or are applied only in part
- Available as a voluntary option for point 1 systems even where standards are applied
Exception - law enforcement:
- For biometric systems used by law enforcement/migration/asylum authorities
- Market surveillance authority acts as notified body
43.3 — Substantial Modification
Systems undergoing substantial modification must undergo new conformity assessment.
Definition: Modification after market placement affecting:
- Compliance with requirements
- Intended purpose
Article 44-46: Certificates and Derogations
Article 44: Certificates issued by notified bodies are valid for the period they indicate (up to 4 years for Annex III systems, 5 years for Annex I systems) and are renewable.
Article 45: Appeal procedures must exist for applicants challenging notified body decisions.
Article 46: Derogation from conformity assessment: for exceptional reasons (public security, protection of life and health, environmental protection, or protection of key industrial and infrastructural assets), market surveillance authorities may authorize placing specific systems on the market before conformity assessment is completed.
Article 47: EU Declaration of Conformity
47.1 — Declaration Requirement
Provider must draw up written EU declaration of conformity for each high-risk AI system.
47.2 — Contents (Annex V)
Declaration must state:
- Provider name and address
- System name, type, version
- Declaration that system complies with AI Act
- References to harmonized standards or common specifications used
- Notified body details (if applicable)
- Signature, date, place
47.3 — Format and Language
- Machine-readable format (electronic)
- Physical or electronically signed
- Translated into language(s) required by Member States
47.4 — Retention
Declaration must be kept available for 10 years after system placed on market.
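Because the declaration must exist in machine-readable form, it can be generated from structured data. The sketch below assembles a JSON document covering the Annex V items listed above; the schema and every value are illustrative assumptions, since the Act mandates the content rather than a specific format.

```python
import json
from datetime import date


def declaration_of_conformity(provider: dict, system: dict,
                              standards: list[str],
                              notified_body: dict | None = None) -> str:
    """Assemble a machine-readable EU declaration of conformity (illustrative schema)."""
    declaration = {
        "provider": provider,                       # name and address
        "system": system,                           # name, type, version
        "statement": "This AI system complies with Regulation (EU) 2024/1689.",
        "harmonised_standards": standards,          # or common specifications applied
        "notified_body": notified_body,             # name and number, if involved
        "place_and_date": {"place": "Berlin", "date": date.today().isoformat()},
        "signatory": {"name": "Jane Doe", "role": "Head of Compliance"},
    }
    return json.dumps(declaration, indent=2)


if __name__ == "__main__":
    print(declaration_of_conformity(
        provider={"name": "Example AI GmbH", "address": "Example Str. 1, Berlin"},
        system={"name": "ExampleTriage", "type": "text classifier", "version": "1.4.2"},
        standards=["(harmonised standard reference, once published)"],
    ))
```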
Article 48: CE Marking
48.1 — Marking Requirement
High-risk AI systems meeting requirements must bear CE marking before placing on market.
48.2 — CE Marking Rules
- Affixed visibly, legibly, indelibly on system or packaging/documentation
- Followed by identification number of notified body (if third-party assessment)
- Digital CE marking for software products (accessible in user interface or documentation)
48.3 — Meaning of CE Marking
CE marking indicates:
- System complies with AI Act
- Conformity assessment completed
- Provider assumes full responsibility
48.4 — Prohibition
Cannot affix CE marking if system doesn’t comply. False CE marking = Article 99 penalties.
Article 49: Registration
Covered in detail in database-registration.md. See that document for full registration requirements.
Summary:
- Providers must register in EU database before market placement
- Deployers (public authorities) must register before putting into service
- Registration creates public transparency
Article 50: Transparency Obligations for Certain AI Systems (Chapter IV)
50.1 — Emotion Recognition and Biometric Categorization
Deployers must inform natural persons exposed to:
- Emotion recognition systems, OR
- Biometric categorization systems
Timing: Before exposure occurs.
Format: Clear, understandable language.
Exception: Law enforcement when notification would prejudice investigation.
50.2 — AI-Generated Content (Deepfakes)
Providers of AI systems generating synthetic audio, image, video, or text must:
- Mark outputs in machine-readable format
- Make the artificial generation or manipulation detectable
Deployers of systems generating or manipulating deep fake content must additionally disclose that the content has been artificially generated or manipulated.
Applies to:
- Deep fake images, audio, video
- Synthetic text purporting to be authentic
- Manipulated content
Exceptions:
- AI-assisted editing within scope of creative freedom
- Content detection/prevention systems
- Administrative, legal proceedings (authorized use)
50.3 — Chatbots and Conversational AI
Providers of AI systems intended to interact directly with natural persons must ensure:
- Individuals are informed they are interacting with an AI system
- Disclosure is made at the latest at the first interaction
Unless:
- Obvious from circumstances
- Authorized law enforcement use
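For the machine-readable marking of synthetic content, one simplified approach is to attach provenance metadata to each generated artefact. The sidecar file below is an illustration only; production systems typically use established provenance or watermarking standards, and every field name here is an assumption.

```python
import hashlib
import json
from pathlib import Path


def write_provenance_sidecar(content_path: Path, generator: str, model_version: str) -> Path:
    """Write a JSON sidecar declaring the file as AI-generated (illustrative only)."""
    digest = hashlib.sha256(content_path.read_bytes()).hexdigest()
    sidecar = {
        "artificially_generated": True,
        "generator": generator,
        "model_version": model_version,
        "content_sha256": digest,   # ties the disclosure to this exact file
    }
    out = content_path.with_suffix(content_path.suffix + ".provenance.json")
    out.write_text(json.dumps(sidecar, indent=2))
    return out


if __name__ == "__main__":
    sample = Path("generated_image.png")
    sample.write_bytes(b"\x89PNG...")  # stand-in for real generated content
    print(write_provenance_sidecar(sample, "ExampleImageGen", "2.1"))
```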
Compliance Workflow for Providers
Phase 1: Design and Development
- Classify system (Art 6) — Is it high-risk per Annex III?
- Establish quality management system (Art 17)
- Implement risk management system (Art 9)
- Identify health/safety/fundamental rights risks
- Assess reasonably foreseeable misuse
- Implement mitigation measures
- Implement data governance (Art 10)
- Ensure training data relevant, representative, free from bias
- Document data management practices
- Design for human oversight (Art 14)
- Enable understanding, override, intervention
- Ensure accuracy, robustness, cybersecurity (Art 15)
- Define accuracy metrics
- Implement error handling
- Secure against attacks
- Implement automatic logging (Art 12)
- Log all relevant events
- Ensure tamper-proof storage
- Draft technical documentation (Art 11, Annex IV)
- Complete all required sections
- Keep under version control
Phase 2: Pre-Market Assessment
- Prepare instructions for use (Art 13)
- Clear, comprehensive, in appropriate languages
- Include all mandatory information
- Conduct conformity assessment (Art 43)
- Internal control (Annex VI), OR
- With notified body if required (Annex VII)
- Draw up EU declaration of conformity (Art 47, Annex V)
- All mandatory information
- Signed and dated
- Affix CE marking (Art 48)
- On system, packaging, or documentation
- Include notified body number if applicable
Phase 3: Market Entry
- Register in EU database (Art 49) BEFORE market placement
- Complete Annex VIII Section A
- Upload declaration and instructions
- Identify as provider (Art 16)
- Display name, address on system
- Appoint authorized representative (Art 22) if established outside EU
- Place on market - System may now be sold/distributed
Phase 4: Post-Market
- Keep documentation (Art 18) for 10 years
- Implement post-market monitoring (Art 72) - continuous
- Keep logs (Art 19) accessible for authorities
- Report serious incidents (Art 20) immediately
- Take corrective actions (Art 20) when needed
- Update registration (Art 49) upon changes
- Cooperate with authorities (Art 21) upon request
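The provider workflow above can be enforced as a release gate that blocks market placement until every pre-market step is recorded. The checklist keys below simply mirror Phases 1-3 and are an assumed internal convention.

```python
# Assumed internal release-gate checklist mirroring Phases 1-3 above.
PRE_MARKET_STEPS = [
    "risk_management_system",        # Art 9
    "data_governance",               # Art 10
    "technical_documentation",       # Art 11 / Annex IV
    "automatic_logging",             # Art 12
    "instructions_for_use",          # Art 13
    "human_oversight_design",        # Art 14
    "accuracy_robustness_security",  # Art 15
    "quality_management_system",     # Art 17
    "conformity_assessment",         # Art 43
    "eu_declaration_of_conformity",  # Art 47
    "ce_marking",                    # Art 48
    "eu_database_registration",      # Art 49
]


def market_placement_allowed(completed: set[str]) -> tuple[bool, list[str]]:
    """Allow release only when every pre-market step has been recorded."""
    missing = [step for step in PRE_MARKET_STEPS if step not in completed]
    return (not missing, missing)


if __name__ == "__main__":
    ok, missing = market_placement_allowed({"risk_management_system", "data_governance"})
    print("release allowed:", ok, "| missing:", missing)
```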
Compliance Workflow for Deployers
Before Deployment
- Select compliant system - Verify CE marking, registration
- Review instructions for use (Art 13) - Understand requirements
- Assign human oversight (Art 26) - Competent, trained personnel
- Conduct FRIA (Art 27) if public authority
- Assess fundamental rights impacts
- Document mitigation measures
- Consult stakeholders
- Register in EU database (Art 49(3)) if public authority
- Complete Annex VIII Section C
- Upload FRIA summary
- Notify workers (Art 26) if workplace deployment
- Inform workers’ representatives
- Inform affected workers
- Prepare monitoring procedures (Art 26)
- Define how to monitor performance
- Establish incident reporting process
During Deployment
- Use according to instructions (Art 26)
- Ensure input data quality (Art 26) if controlling inputs
- Monitor system operation (Art 26) continuously
- Keep logs (Art 26) for at least 6 months
- Implement human oversight (Art 26) throughout use
- Notify individuals (Art 26, Art 50) affected by decisions
Incident Response
- Detect incidents through monitoring
- Report serious incidents (Art 26) immediately to:
- Provider
- Market surveillance authority
- Suspend use if system presents risk
- Cooperate with investigations
Common Compliance Pitfalls
| Mistake | Consequence | Fix |
|---|---|---|
| Insufficient risk assessment | Non-compliance (Art 9), potential harm | Conduct thorough risk identification including foreseeable misuse |
| Biased training data | Discriminatory outputs, non-compliance (Art 10) | Examine datasets for bias, ensure representativeness |
| No human oversight capability | Non-compliance (Art 14), inability to correct errors | Design override/intervention mechanisms from start |
| Incomplete technical documentation | Failed conformity assessment, authority rejection | Use Annex IV as checklist, document continuously |
| Placing on market before registration | Non-compliance with Art 49, potential penalties | Complete EU database registration before market placement |
| Deployer skips FRIA | Non-compliance (Art 27) for public authorities | Conduct FRIA before putting into service |
| Not reporting serious incidents | Article 99 penalties, ongoing harm | Establish incident detection and reporting procedures |
| Substantial modification without new assessment | System becomes non-compliant, must withdraw | Treat major changes as new system requiring full assessment |
Penalties for Non-Compliance
| Violation | Maximum Fine | SMEs / Start-ups |
|---|---|---|
| Non-compliance with Section 2 requirements or operator obligations (providers, authorized representatives, importers, distributors: Art 16, 22-24) | Up to €15M or 3% of total worldwide annual turnover, whichever is higher | Same caps, but whichever amount is lower |
| Breach of deployer obligations (Art 26) or transparency obligations (Art 50) | Up to €15M or 3% of total worldwide annual turnover, whichever is higher | Same caps, but whichever amount is lower |
| Supply of incorrect, incomplete, or misleading information to notified bodies or authorities | Up to €7.5M or 1% of total worldwide annual turnover, whichever is higher | Same caps, but whichever amount is lower |
Legal basis: Article 99 (administrative fines); for SMEs and start-ups, the lower of the fixed amount or the percentage applies (Art 99(6)).
Timeline Summary
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited practices enforceable (Art 5) |
| August 2, 2026 | High-risk system requirements fully applicable for Annex III systems; Annex I product-embedded systems follow from August 2, 2027 |
| Before market placement | Conformity assessment, CE marking, registration must be complete |
| 10 years after market placement | Documentation retention requirement expires |
Interaction with Other Regulations
| Regulation | Interaction with AI Act |
|---|---|
| GDPR | Data governance (Art 10) must comply with GDPR; DPIAs may be integrated with FRIAs |
| Cybersecurity Act | Cybersecurity requirements (Art 15) align with EU cybersecurity certification |
| Product Safety Regulation | CE marking and conformity assessment follow New Legislative Framework model |
| Accessibility Directive | Art 16 requires compliance with EU 2016/2102 and EU 2019/882 |
| DSA | Transparency obligations (Art 50) complement DSA requirements |
Resources for Compliance
Official Sources
- AI Act full text: EUR-Lex
- AI Act Service Desk: Official Q&A
- Commission guidance: Digital Strategy portal
Standards and Specifications
- JTC 21 standards: CEN-CENELEC website
- Common specifications: To be published in Official Journal
Conformity Assessment
- Notified bodies list: To be published in Commission database
- Conformity assessment procedures: Annexes VI and VII of AI Act
Citation
Chapter III — High-Risk AI Systems, Regulation (EU) 2024/1689