EU AI Act: Risk Classification & Prohibited Practices
Risk Classification (Four Tiers)
Tier 1: Unacceptable Risk (Prohibited) [Art 5]
These AI uses are banned:
| Prohibited Practice | Example | Citation |
|---|---|---|
| Social scoring | Government scoring citizens for social behaviour | Art 5(1)(c) |
| Subliminal manipulation | AI using subliminal or deceptive techniques to materially distort behaviour | Art 5(1)(a) |
| Exploitation of vulnerabilities | AI exploiting age, disability, or social/economic situation to cause harm | Art 5(1)(b) |
| Real-time biometric identification | Live facial recognition in public spaces for law enforcement (narrow exceptions) | Art 5(1)(h) |
| Emotion recognition at work/school | AI detecting emotions of employees/students | Art 5(1)(f) |
| Predictive policing (individuals) | Predicting individual crime risk from profiling | Art 5(1)(d) |
| Untargeted facial scraping | Building face databases from internet/CCTV | Art 5(1)(e) |
Effective: 2 February 2025
Tier 2: High Risk [Art 6, Annex III]
AI systems used in the following domains must comply with strict requirements:
| Domain | Examples | Citation |
|---|---|---|
| Biometrics | Facial recognition, emotion detection | Annex III(1) |
| Critical infrastructure | Energy, water, transport systems | Annex III(2) |
| Education | Admissions, grading, proctoring | Annex III(3) |
| Employment | CV screening, performance monitoring | Annex III(4) |
| Essential services | Credit scoring, insurance pricing | Annex III(5) |
| Law enforcement | Risk assessment, evidence analysis | Annex III(6) |
| Migration/asylum | Border control, visa processing | Annex III(7) |
| Justice | Legal research influencing outcomes | Annex III(8) |
Tier 3: Limited Risk [Art 50]
Transparency obligations only:
- Chatbots must disclose they are AI
- AI-generated content must be labelled
- Emotion recognition systems must inform users
- Deepfakes must be disclosed
Tier 4: Minimal Risk
No specific requirements — most AI systems fall here (spam filters, games, etc.)
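The four tiers above form a severity-ordered decision procedure: check prohibitions first, then high-risk domains, then transparency categories, defaulting to minimal risk. A minimal sketch of that triage logic, assuming illustrative use-case labels and a `classify` helper that are not part of the Act itself:

```python
# Hedged sketch of the EU AI Act's four-tier triage, checked in order of
# severity. The label sets and classify() are illustrative assumptions;
# real classification is context-dependent and needs legal analysis.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art 5)"
    HIGH = "high risk (Art 6, Annex III)"
    LIMITED = "transparency obligations (Art 50)"
    MINIMAL = "no specific requirements"

# Practices banned outright under Art 5(1) (labels are illustrative).
PROHIBITED = {
    "social_scoring", "subliminal_manipulation",
    "realtime_public_biometric_id", "workplace_emotion_recognition",
    "individual_predictive_policing", "untargeted_face_scraping",
}
# Annex III domains triggering high-risk obligations.
HIGH_RISK_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
# Art 50 transparency-only categories.
TRANSPARENCY_ONLY = {"chatbot", "ai_generated_content", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a labelled use case, most severe first."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note the severity ordering matters: emotion recognition, for example, can be prohibited (at work/school), high risk (Annex III(1)), or transparency-only (Art 50) depending on context, so the most restrictive applicable rule must be checked first.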
Source Text (Article 5 - Prohibited Practices)
- The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of, materially distorting the behaviour of a person…
(c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment… (ii) detrimental or unfavourable treatment… that is unjustified or disproportionate…
(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary [for specific listed purposes]…
Citation
Article 5, EU AI Act (Regulation 2024/1689)