EU AI Act: High-Risk System Requirements
High-Risk AI System Requirements [Arts 8-15]
Rule: High-risk AI systems must meet all these requirements before market placement:
Requirements overview {#risk-management}
| Requirement | What it means | Citation |
|---|---|---|
| Risk management | Ongoing risk identification and mitigation | Art 9 |
| Data governance | Training, validation and testing data must be relevant, representative and, to the best extent possible, free of errors | Art 10 |
| Technical documentation | Full documentation of design and operation | Art 11 |
| Record-keeping | Automatic logging of system operation | Art 12 |
| Transparency | Clear instructions for deployers | Art 13 |
| Human oversight | Ability for humans to intervene | Art 14 |
| Accuracy & robustness | Appropriate levels throughout lifecycle | Art 15 |
Citation: Articles 8-15
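Article 12's record-keeping duty requires automatic logging of system operation, but the Act does not prescribe a log format. A minimal sketch of what automatic event capture could look like, with illustrative field names that are assumptions rather than a mandated schema:

```python
import json
import time

class AuditLog:
    """Hypothetical sketch of Article 12-style automatic record-keeping.
    Field names are illustrative; the Act does not prescribe a schema."""

    def __init__(self):
        self.events = []

    def record(self, event_type, detail):
        # Each operational event is logged automatically with a timestamp,
        # without relying on the operator to remember to log it.
        self.events.append({
            "timestamp": time.time(),
            "event": event_type,
            "detail": detail,
        })

    def export(self):
        # Logs should be retrievable for post-market monitoring and audits.
        return json.dumps(self.events, indent=2)

log = AuditLog()
log.record("inference", {"input_id": "case-001", "decision": "approve"})
log.record("override", {"operator": "human-reviewer", "reason": "manual check"})
print(len(log.events))  # 2
```

In practice the logging would be wired into the inference path itself so that records cannot be skipped, which is the point of "automatic" in Article 12.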
Conformity Assessment [Art 43]
Rule: Before placing a high-risk AI system on the market, the provider must complete a conformity assessment.
Two routes:
- Self-assessment (internal control): for most Annex III high-risk systems where harmonised standards have been applied
- Third-party assessment (notified body): required for biometric systems where harmonised standards are not applied in full, and for safety components covered by Annex I sectoral legislation
Must: Affix the CE marking after successful assessment
Citation: Article 43
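The two routes above can be sketched as a simple decision helper. This is an illustrative simplification only: the actual routing in Article 43 depends on the Annex III category and on whether harmonised standards were applied in full, so the predicate names here are assumptions.

```python
def conformity_route(is_biometric: bool, harmonised_standards_applied: bool) -> str:
    """Illustrative sketch of Article 43 route selection (simplified).
    Real routing also depends on the Annex III category and Annex I
    sectoral legislation; this captures only the two-route summary above."""
    if is_biometric and not harmonised_standards_applied:
        # Biometric systems without full application of harmonised
        # standards need a notified body.
        return "third-party (notified body)"
    # Most other high-risk systems may use internal control.
    return "self-assessment (internal control)"

print(conformity_route(is_biometric=True, harmonised_standards_applied=False))
# third-party (notified body)
```

A real compliance workflow would treat this as one input among several, not as the authoritative determination.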
EU Database Registration [Art 49]
Rule: High-risk AI systems must be registered in the EU database before market placement.
Required information:
- Provider details
- System description
- Intended purpose
- Status (on market, withdrawn, recalled)
Public access: Database is publicly searchable
Citation: Article 49
Source Text (Article 9 - Risk Management)
A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating…
The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Section…
Article 14(1) - Human Oversight: High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
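Article 14(1) is a design requirement: the system must be built so a natural person can effectively intervene while it is in use. One way to make that concrete in software is a gate that holds model outputs for human confirmation or override. This is a hypothetical sketch of such a pattern, not a method prescribed by the Act:

```python
class OversightGate:
    """Hypothetical human-oversight hook in the spirit of Article 14(1):
    the system proposes decisions; a natural person confirms or overrides."""

    def __init__(self, model):
        self.model = model
        self.pending = []  # decisions awaiting human review

    def propose(self, x):
        # The model proposes a decision but does not act autonomously.
        decision = self.model(x)
        self.pending.append((x, decision))
        return decision

    def review(self, index, approve, override=None):
        # A human intervenes: approve the proposal or substitute their own.
        _, decision = self.pending.pop(index)
        return decision if approve else override

gate = OversightGate(lambda x: "approve" if x > 0 else "reject")
proposed = gate.propose(5)          # model proposes "approve"
final = gate.review(0, approve=False, override="reject")  # human overrides
print(final)  # reject
```

The design point is that the human-machine interface exposes the proposal before it takes effect, which is what makes oversight "effective" rather than after-the-fact.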
Citation: Articles 8-15, EU AI Act (Regulation (EU) 2024/1689)