EU AI Act: Prohibited AI Practices
Prohibited AI Practices [Art 5]
Rule: Eight categories of AI systems are absolutely prohibited in the EU due to unacceptable risks to fundamental rights and EU values. Violations carry fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
Effective: February 2, 2025
Overview of Prohibition
Article 5 establishes “red lines” for AI systems that:
- Violate fundamental rights protected by the EU Charter
- Contravene EU values of human dignity and democracy
- Pose unacceptable risks regardless of safeguards
No risk mitigation allowed: These systems cannot be made compliant through technical measures.
1. Manipulative and Deceptive Techniques [Art 5.1(a)]
Prohibition
AI systems deploying subliminal techniques beyond a person’s consciousness OR purposefully manipulative or deceptive techniques that:
- Materially distort behavior
- Appreciably impair ability to make informed decisions
- Cause or likely cause significant harm
Examples
| Prohibited | Description |
|---|---|
| Dark patterns | AI-generated interfaces designed to trick users into decisions against their interests |
| Subliminal audio/visual | Imperceptible stimuli influencing behavior (e.g., hidden messages in advertisements) |
| Behavioral manipulation | AI nudging people toward harmful actions (e.g., excessive gambling, dangerous challenges) |
| Deceptive chatbots | AI pretending to be human to manipulate emotional responses for commercial gain |
Key Terms
- Subliminal: Below threshold of conscious perception
- Material distortion: Significant change in behavior that wouldn’t have occurred otherwise
- Appreciable impairment: Substantial reduction in decision-making autonomy
Exceptions
None. Any manipulative or deceptive technique that meets the criteria above is prohibited; there is no carve-out.
2. Exploitation of Vulnerabilities [Art 5.1(b)]
Prohibition
AI systems exploiting vulnerabilities of specific groups due to:
- Age (children, elderly)
- Disability (physical or mental)
- Socioeconomic situation
Where such exploitation materially distorts behavior causing or likely to cause significant harm.
Examples
| Prohibited | Description |
|---|---|
| Predatory lending | AI targeting financially vulnerable with harmful credit products |
| Child manipulation | AI exploiting children’s limited comprehension for commercial purposes |
| Elderly scams | AI-powered systems designed to confuse or mislead elderly users |
| Disability exploitation | AI taking advantage of cognitive impairments to extract consent |
Key Elements
All three elements must be established:
- Exploitation of vulnerability (age/disability/socioeconomic)
- Material distortion of behavior
- Significant harm caused or likely
Exceptions
Legitimate accessibility tools that assist vulnerable groups (not exploit them) remain permitted.
3. Social Scoring [Art 5.1(c)]
Prohibition
AI systems for evaluation or classification of persons based on:
- Social behavior
- Known, inferred, or predicted personal or personality characteristics
Where such scoring leads to EITHER:
- Detrimental or unfavorable treatment in contexts unrelated to where data was generated/collected, OR
- Detrimental treatment that is unjustified or disproportionate to behavior or its gravity
Examples
| Prohibited | Description |
|---|---|
| Citizen trustworthiness scores | Government-run systems rating citizens for access to services |
| Cross-context penalization | Low social media activity score affecting loan eligibility |
| Behavioral reputation systems | Scoring based on lifestyle choices unrelated to specific context |
| Automated social exclusion | AI denying access based on personality traits or social connections |
What’s NOT Social Scoring
| Permitted | Reason |
|---|---|
| Credit scoring | Based on financial behavior in relevant context (lending) |
| Insurance risk assessment | Actuarial calculations using directly relevant data |
| Fraud detection | Assessing specific transaction risk, not general “trustworthiness” |
| Game reputation systems | Scoring within single-context environment |
Key Distinction
Context matters: Using workplace behavior to evaluate workplace performance = OK. Using workplace behavior to deny housing = PROHIBITED.
4. Predictive Policing (Risk Assessment) [Art 5.1(d)]
Prohibition
AI systems assessing risk of criminal offense based solely on:
- Profiling of a natural person, OR
- Assessing personality traits
Examples
| Prohibited | Description |
|---|---|
| Recidivism prediction based on demographics | AI predicting reoffending from age/race/neighborhood |
| Personality-based risk scores | Assessing criminality from psychological profiling alone |
| Predictive hotspot mapping | Identifying individuals as threats based on location patterns |
| Pre-crime systems | Flagging people as future criminals without specific evidence |
Exception: Human-Supported Assessment
Permitted: AI supporting human assessment of involvement in actual criminal activity using:
- Objective, verifiable facts
- Evidence-based analysis
- A human decision-maker retaining final authority
Example (OK): Police use AI to analyze CCTV footage to identify person matching witness description at crime scene.
Example (PROHIBITED): AI generates “high-risk offender” list based on demographic profiles.
Key Limitation
“Based solely on profiling” — if AI incorporates actual evidence of criminal involvement, not prohibited.
5. Facial Recognition Database Creation [Art 5.1(e)]
Prohibition
Creation or expansion of facial recognition databases through:
- Untargeted scraping of facial images from the internet, OR
- Untargeted scraping from CCTV footage
Examples
| Prohibited | Description |
|---|---|
| Web scraping for faces | AI crawling social media/websites to build facial database |
| Indiscriminate CCTV harvesting | Collecting all faces from public cameras without specific purpose |
| Mass facial data collection | Building databases from publicly accessible images without consent |
What’s NOT Prohibited
| Permitted | Reason |
|---|---|
| Targeted collection for specific investigation | Lawful collection with defined purpose (e.g., missing person search) |
| Voluntarily submitted photos | Users providing images with informed consent |
| Law enforcement databases from arrests | Lawfully obtained biometric data in criminal justice context |
6. Emotion Recognition [Art 5.1(f)]
Prohibition
AI systems inferring emotions of natural persons in:
- Workplace settings
- Educational institutions
Examples
| Prohibited | Description |
|---|---|
| Employee mood monitoring | AI analyzing facial expressions/voice tone to track worker emotions |
| Student engagement tracking | Systems detecting boredom or attentiveness in classrooms |
| Hiring emotion assessment | Analyzing candidate emotions during job interviews |
| Performance reviews via emotion | Using inferred emotional states as performance metrics |
Exceptions
Medical or safety purposes:
| Permitted | Purpose |
|---|---|
| Driver fatigue detection | Preventing accidents by detecting drowsiness |
| Medical diagnosis support | Clinical assessment of emotional wellbeing |
| Mental health monitoring | Detecting distress in healthcare settings |
| Pilot alertness systems | Aviation safety monitoring |
Key Distinction
Context + Purpose: Emotion AI for productivity monitoring = PROHIBITED. Emotion AI for safety (fatigue detection) = OK.
7. Biometric Categorization [Art 5.1(g)]
Prohibition
AI systems categorizing individuals based on biometric data to deduce or infer:
- Race or ethnic origin
- Political opinions
- Trade union membership
- Religious or philosophical beliefs
- Sex life or sexual orientation
Exception: Labeling or filtering lawfully acquired biometric data for law enforcement purposes only.
Examples
| Prohibited | Description |
|---|---|
| Sexual orientation inference from faces | AI predicting LGBTQ+ identity from facial features |
| Race-based profiling | Automated classification by ethnicity for non-law-enforcement purposes |
| Religious belief detection | Inferring faith from appearance/clothing for commercial use |
| Political affiliation prediction | Deducing political views from biometric characteristics |
Law Enforcement Exception
Permitted ONLY for law enforcement:
- Filtering suspect images by physical characteristics in investigation
- Organizing lawfully obtained evidence by features
- Must have legal basis and fundamental rights assessment
NOT permitted: Private sector biometric categorization of protected characteristics.
8. Real-Time Remote Biometric Identification (Public Spaces) [Art 5.1(h)]
Prohibition
Real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes.
Definition:
- Real-time: Identification without significant delay
- Remote: No physical interaction required
- Publicly accessible: Streets, parks, transport hubs, shops, etc.
Strict Exceptions
Law enforcement may use real-time RBI ONLY for:
| Exception | Conditions |
|---|---|
| 1. Victim searches | Targeted search for victims of abduction, trafficking, sexual exploitation, or missing persons |
| 2. Imminent threats | Prevention of specific, substantial, imminent threat to life/safety or terrorist attack |
| 3. Criminal suspects | Locating/identifying suspects of offenses punishable by ≥4 years imprisonment |
Mandatory Safeguards
Before deployment, law enforcement MUST (a minimal gate-check sketch follows this list):
1. Fundamental Rights Impact Assessment (FRIA)
   - Assess proportionality
   - Identify affected persons
   - Evaluate alternatives
2. EU Database Registration
   - Register system in centralized database
   - Public transparency
3. Prior Judicial Authorization
   - Court or independent authority approval
   - Case-by-case basis
   - Urgent cases: ex-post authorization
4. Targeted Use Only
   - Limited to specific individuals
   - Time/geographic limits
   - Minimize false positives
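Purely as an illustrative aid (not a legal determination), here is a minimal Python sketch of such a pre-deployment gate, where a single undocumented safeguard blocks use. All names (`RbiDeployment`, `authorized_to_deploy`, and the field names) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RbiDeployment:
    """Hypothetical record of the Art 5.1(h) safeguards for one deployment."""
    fria_completed: bool             # Fundamental Rights Impact Assessment done
    registered_in_eu_database: bool  # System registered in the EU database
    judicial_authorization: bool     # Prior court/independent-authority approval
    urgent_ex_post_pending: bool     # Urgent case: ex-post authorization requested
    targeted_individuals: list[str]  # Specific persons sought (must be non-empty)
    time_limited: bool               # Deployment has a defined time window
    geo_limited: bool                # Deployment has a defined geographic scope

def authorized_to_deploy(d: RbiDeployment) -> bool:
    """Every safeguard must hold; any missing item blocks deployment."""
    # Urgent cases may substitute a pending ex-post authorization for prior approval.
    authorization_ok = d.judicial_authorization or d.urgent_ex_post_pending
    return all([
        d.fria_completed,
        d.registered_in_eu_database,
        authorization_ok,
        bool(d.targeted_individuals),  # targeted use only, never indiscriminate
        d.time_limited,
        d.geo_limited,
    ])
```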
What’s NOT Prohibited
- Post-event facial recognition (non-real-time) for investigating past crimes
- Border control biometrics (not “publicly accessible” as defined in the Regulation)
- Airport security checks (controlled access, not public space)
- Private premises (owner’s property, not “publicly accessible”)
Penalties [Art 99]
| Violation | Fine |
|---|---|
| Article 5 breach (prohibited practices) | Up to €35,000,000 OR 7% of total worldwide annual turnover (whichever is higher) |
| Intentional violations | Higher end of penalty range |
| Repeated violations | Aggravating factor |
No de minimis exception: Applies to organizations of all sizes.
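To make the "whichever is higher" mechanic concrete, a short illustrative calculation (the function name and turnover figures are invented):

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Art 5 fine: EUR 35M or 7% of turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical examples:
print(max_article5_fine(100_000_000))    # 35,000,000.0 (7% = 7M, so the 35M floor applies)
print(max_article5_fine(1_000_000_000))  # 70,000,000.0 (7% of 1B exceeds the 35M floor)
```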
Enforcement Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Article 5 prohibitions become enforceable |
| February 4, 2025 | European Commission publishes guidelines on prohibited practices |
| August 2, 2026 | Full AI Act enforcement (high-risk systems, conformity assessment, etc.) |
Compliance Checklist
Organizations should take the following steps; a minimal inventory audit sketch follows the list:
- Audit all AI systems for Article 5 prohibited practices
- Cease use of any prohibited systems immediately
- Review emotion recognition in workplace/education (unless medical/safety)
- Verify biometric systems don’t categorize by protected characteristics
- Check facial recognition doesn’t scrape internet/CCTV indiscriminately
- Ensure no social scoring across unrelated contexts
- Verify predictive policing uses objective facts, not profiling alone
- Eliminate manipulation/deception techniques in AI interactions
- Document safeguards for vulnerable group interactions
- If law enforcement: ensure real-time RBI has judicial authorization + FRIA
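As a hedged sketch of the first audit step, one could maintain an inventory of AI systems tagged with declared capabilities and flag anything touching an Article 5 category for human review. All names and the example inventory below are invented:

```python
# Hypothetical inventory audit: flag systems touching any Art 5 category.
ARTICLE5_CATEGORIES = {
    "manipulation", "vulnerability_exploitation", "social_scoring",
    "predictive_policing", "face_scraping", "emotion_recognition",
    "biometric_categorization", "realtime_rbi",
}

def flag_for_review(inventory: dict[str, set[str]]) -> list[str]:
    """Return names of systems whose capabilities intersect an Art 5 category."""
    return [name for name, caps in inventory.items()
            if caps & ARTICLE5_CATEGORIES]

# Usage with an invented inventory:
inventory = {
    "hr-screening-bot": {"emotion_recognition"},  # needs immediate review
    "fraud-detector": {"transaction_scoring"},    # outside Art 5 categories
}
print(flag_for_review(inventory))  # ['hr-screening-bot']
```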
Interaction with Other Laws
| Law | Overlap with Article 5 |
|---|---|
| GDPR | Prohibited biometric processing reinforces GDPR Art 9 (special category data) |
| DSA | Manipulative dark patterns also regulated under Digital Services Act |
| Charter of Fundamental Rights | Article 5 implements Charter protections (dignity, non-discrimination, privacy) |
| National criminal law | Law enforcement exceptions subject to national procedural safeguards |
Gray Areas and Guidance
Is my system prohibited? Decision tree (a minimal screening sketch follows the list):
1. Is it one of the 8 categories?
   - No → Not prohibited under Art 5 (may be high-risk under Chapter III)
   - Yes → Continue to step 2
2. Does an exception apply?
   - Medical/safety (emotion recognition)
   - Law enforcement (biometric categorization, real-time RBI with safeguards)
   - Human-supported assessment with objective facts (predictive policing)
3. If an exception applies, are safeguards in place?
   - Judicial authorization (RBI)
   - Fundamental rights assessment
   - Database registration
   - Proportionality review
4. If no exception applies OR safeguards are missing:
   - PROHIBITED: cease use immediately
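For teams that want this screening as a first-pass triage (again, an illustrative aid rather than a legal determination), a minimal sketch; the `Verdict` values and the `screen_article5` name are hypothetical:

```python
from enum import Enum, auto

class Verdict(Enum):
    NOT_PROHIBITED = auto()            # Outside the 8 categories (may still be high-risk)
    PERMITTED_WITH_SAFEGUARDS = auto() # Exception applies and safeguards documented
    PROHIBITED = auto()                # No exception, or safeguards missing

def screen_article5(in_prohibited_category: bool,
                    exception_applies: bool,
                    safeguards_in_place: bool) -> Verdict:
    """First-pass triage mirroring the decision tree above."""
    if not in_prohibited_category:
        return Verdict.NOT_PROHIBITED              # Step 1: outside Art 5 scope
    if exception_applies and safeguards_in_place:
        return Verdict.PERMITTED_WITH_SAFEGUARDS   # Steps 2-3 satisfied
    return Verdict.PROHIBITED                      # Step 4: cease use immediately
```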
Common Misconceptions
| Myth | Reality |
|---|---|
| “Small companies exempt from Article 5” | NO — applies to all operators placing AI on EU market |
| “Can make prohibited AI compliant with safeguards” | NO — no risk mitigation possible for prohibited practices |
| “Only applies to high-risk AI systems” | NO — prohibitions apply regardless of risk classification |
| “Post-event facial recognition also banned” | NO — only real-time RBI in public spaces prohibited (with exceptions) |
| “Can use emotion AI if employees consent” | NO — workplace emotion recognition prohibited even with consent (unless safety) |
Citation
Article 5 — Prohibited AI Practices, Regulation (EU) 2024/1689