EU AI Act: Prohibited AI Practices

Prohibited AI Practices [Art 5]

Rule: Eight categories of AI systems are absolutely prohibited in the EU due to unacceptable risks to fundamental rights and EU values. Violations carry fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.

Effective: February 2, 2025

Overview of Prohibition

Article 5 establishes “red lines” for AI systems that:

  • Violate fundamental rights protected by the EU Charter
  • Contravene EU values of human dignity and democracy
  • Pose unacceptable risks regardless of safeguards

No risk mitigation allowed: These systems cannot be made compliant through technical measures.

1. Manipulative and Deceptive Techniques [Art 5.1(a)]

Prohibition

AI systems deploying subliminal techniques beyond a person’s consciousness OR purposefully manipulative or deceptive techniques that:

  1. Materially distort behavior
  2. Appreciably impair ability to make informed decisions
  3. Cause, or are reasonably likely to cause, significant harm

Examples

| Prohibited | Description |
| --- | --- |
| Dark patterns | AI-generated interfaces designed to trick users into decisions against their interests |
| Subliminal audio/visual | Imperceptible stimuli influencing behavior (e.g., hidden messages in advertisements) |
| Behavioral manipulation | AI nudging people toward harmful actions (e.g., excessive gambling, dangerous challenges) |
| Deceptive chatbots | AI pretending to be human to manipulate emotional responses for commercial gain |

Key Terms

  • Subliminal: Below threshold of conscious perception
  • Material distortion: Significant change in behavior that wouldn’t have occurred otherwise
  • Appreciable impairment: Substantial reduction in decision-making autonomy

Exceptions

None. All forms of manipulative or deceptive AI are prohibited.

2. Exploitation of Vulnerabilities [Art 5.1(b)]

Prohibition

AI systems exploiting vulnerabilities of specific groups due to:

  • Age (children, elderly)
  • Disability (physical or mental)
  • Socioeconomic situation

Where such exploitation materially distorts behavior and causes, or is reasonably likely to cause, significant harm.

Examples

| Prohibited | Description |
| --- | --- |
| Predatory lending | AI targeting the financially vulnerable with harmful credit products |
| Child manipulation | AI exploiting children’s limited comprehension for commercial purposes |
| Elderly scams | AI-powered systems designed to confuse or mislead elderly users |
| Disability exploitation | AI taking advantage of cognitive impairments to extract consent |

Key Elements

Must prove:

  1. Exploitation of vulnerability (age/disability/socioeconomic)
  2. Material distortion of behavior
  3. Significant harm caused or likely

Exceptions

Legitimate accessibility tools that assist vulnerable groups (not exploit them) remain permitted.

3. Social Scoring [Art 5.1(c)]

Prohibition

AI systems for evaluation or classification of persons based on:

  • Social behavior
  • Known, inferred, or predicted personal or personality characteristics

Where such scoring leads to EITHER:

  1. Detrimental or unfavorable treatment in contexts unrelated to where data was generated/collected, OR
  2. Detrimental treatment that is unjustified or disproportionate to behavior or its gravity

Examples

| Prohibited | Description |
| --- | --- |
| Citizen trustworthiness scores | Government-run systems rating citizens for access to services |
| Cross-context penalization | Low social media activity score affecting loan eligibility |
| Behavioral reputation systems | Scoring based on lifestyle choices unrelated to specific context |
| Automated social exclusion | AI denying access based on personality traits or social connections |

What’s NOT Social Scoring

| Permitted | Reason |
| --- | --- |
| Credit scoring | Based on financial behavior in relevant context (lending) |
| Insurance risk assessment | Actuarial calculations using directly relevant data |
| Fraud detection | Assessing specific transaction risk, not general “trustworthiness” |
| Game reputation systems | Scoring within single-context environment |

Key Distinction

Context matters: Using workplace behavior to evaluate workplace performance = OK. Using workplace behavior to deny housing = PROHIBITED.
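
To make the context test concrete, here is a minimal Python sketch of how an internal audit tool might encode the two disjunctive limbs of Art 5.1(c). The `ScoreUse` record and its context labels are hypothetical illustrations, not terms defined in the Regulation.

```python
from dataclasses import dataclass

@dataclass
class ScoreUse:
    data_context: str      # where the underlying behavioral data was generated/collected
    decision_context: str  # where the score drives a decision about the person
    detrimental: bool      # treatment is detrimental or unfavorable
    proportionate: bool    # treatment is proportionate to the behavior and its gravity

def is_prohibited_social_scoring(use: ScoreUse) -> bool:
    """Art 5.1(c): detrimental treatment that is either cross-context or disproportionate."""
    if not use.detrimental:
        return False
    cross_context = use.data_context != use.decision_context  # limb 1
    disproportionate = not use.proportionate                  # limb 2
    return cross_context or disproportionate

# Workplace behavior used to deny housing: prohibited (cross-context)
print(is_prohibited_social_scoring(
    ScoreUse("workplace", "housing", detrimental=True, proportionate=True)))   # True
# Workplace behavior used for a workplace review: outside the Art 5 ban
print(is_prohibited_social_scoring(
    ScoreUse("workplace", "workplace", detrimental=True, proportionate=True))) # False
```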

4. Predictive Policing (Risk Assessment) [Art 5.1(d)]

Prohibition

AI systems assessing risk of criminal offense based solely on:

  • Profiling of a natural person, OR
  • Assessing personality traits

Examples

| Prohibited | Description |
| --- | --- |
| Recidivism prediction based on demographics | AI predicting reoffending from age/race/neighborhood |
| Personality-based risk scores | Assessing criminality from psychological profiling alone |
| Predictive hotspot mapping | Identifying individuals as threats based on location patterns |
| Pre-crime systems | Flagging people as future criminals without specific evidence |

Exception: Human-Supported Assessment

Permitted: AI supporting human assessment of involvement in actual criminal activity, provided that:

  • The assessment relies on objective, verifiable facts
  • The analysis is evidence-based
  • A human decision-maker retains final authority

Example (OK): Police use AI to analyze CCTV footage to identify person matching witness description at crime scene.

Example (PROHIBITED): AI generates “high-risk offender” list based on demographic profiles.

Key Limitation

The ban applies only to assessments “based solely on” profiling; if the AI incorporates actual evidence of criminal involvement, it is not prohibited.

5. Facial Recognition Database Creation [Art 5.1(e)]

Prohibition

Creation or expansion of facial recognition databases through:

  • Untargeted scraping of facial images from internet, OR
  • Untargeted scraping from CCTV footage

Examples

| Prohibited | Description |
| --- | --- |
| Web scraping for faces | AI crawling social media/websites to build a facial database |
| Indiscriminate CCTV harvesting | Collecting all faces from public cameras without a specific purpose |
| Mass facial data collection | Building databases from publicly accessible images without consent |

What’s NOT Prohibited

| Permitted | Reason |
| --- | --- |
| Targeted collection for specific investigation | Lawful collection with defined purpose (e.g., missing person search) |
| Voluntarily submitted photos | Users providing images with informed consent |
| Law enforcement databases from arrests | Lawfully obtained biometric data in criminal justice context |

6. Emotion Recognition [Art 5.1(f)]

Prohibition

AI systems inferring emotions of natural persons in:

  • Workplace settings
  • Educational institutions

Examples

| Prohibited | Description |
| --- | --- |
| Employee mood monitoring | AI analyzing facial expressions/voice tone to track worker emotions |
| Student engagement tracking | Systems detecting boredom or attentiveness in classrooms |
| Hiring emotion assessment | Analyzing candidate emotions during job interviews |
| Performance reviews via emotion | Using inferred emotional states as performance metrics |

Exceptions

Medical or safety purposes:

| Permitted | Purpose |
| --- | --- |
| Driver fatigue detection | Preventing accidents by detecting drowsiness |
| Medical diagnosis support | Clinical assessment of emotional wellbeing |
| Mental health monitoring | Detecting distress in healthcare settings |
| Pilot alertness systems | Aviation safety monitoring |

Key Distinction

Context + Purpose: Emotion AI for productivity monitoring = PROHIBITED. Emotion AI for safety (fatigue detection) = OK.
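
The same context-plus-purpose rule can be written as a one-screen check. This is a sketch under the assumption that context and purpose can be reduced to simple labels; the strings below are illustrative, not categories from the Act.

```python
PROHIBITED_CONTEXTS = {"workplace", "education"}
PERMITTED_PURPOSES = {"medical", "safety"}  # the Art 5.1(f) carve-out

def emotion_ai_allowed(context: str, purpose: str) -> bool:
    """Art 5.1(f) bans emotion inference by context, with a narrow purpose exception."""
    if context in PROHIBITED_CONTEXTS:
        return purpose in PERMITTED_PURPOSES
    return True  # outside workplace/education, Art 5.1(f) does not apply

assert emotion_ai_allowed("workplace", "productivity") is False
assert emotion_ai_allowed("workplace", "safety") is True  # e.g., fatigue detection
assert emotion_ai_allowed("retail", "marketing") is True  # may still be regulated elsewhere
```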

7. Biometric Categorization [Art 5.1(g)]

Prohibition

AI systems categorizing individuals based on biometric data to deduce or infer:

  • Race or ethnic origin
  • Political opinions
  • Trade union membership
  • Religious or philosophical beliefs
  • Sex life or sexual orientation

Exception: Labeling or filtering lawfully acquired biometric data for law enforcement purposes only.

Examples

| Prohibited | Description |
| --- | --- |
| Sexual orientation inference from faces | AI predicting LGBTQ+ identity from facial features |
| Race-based profiling | Automated classification by ethnicity for non-law-enforcement purposes |
| Religious belief detection | Inferring faith from appearance/clothing for commercial use |
| Political affiliation prediction | Deducing political views from biometric characteristics |

Law Enforcement Exception

Permitted ONLY for law enforcement:

  • Filtering suspect images by physical characteristics in investigation
  • Organizing lawfully obtained evidence by features
  • Must have legal basis and fundamental rights assessment

NOT permitted: Private sector biometric categorization of protected characteristics.

8. Real-Time Remote Biometric Identification (Public Spaces) [Art 5.1(h)]

Prohibition

Real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes.

Definition:

  • Real-time: Identification without significant delay
  • Remote: No physical interaction required
  • Publicly accessible: Streets, parks, transport hubs, shops, etc.

Strict Exceptions

Law enforcement may use real-time RBI ONLY for:

| Exception | Conditions |
| --- | --- |
| 1. Victim searches | Targeted search for victims of abduction, trafficking, sexual exploitation, or missing persons |
| 2. Imminent threats | Prevention of specific, substantial, imminent threat to life/safety or terrorist attack |
| 3. Criminal suspects | Locating/identifying suspects of offenses punishable by ≥4 years imprisonment |

Mandatory Safeguards

Before deployment, law enforcement MUST satisfy all of the following (sketched in code after the list):

  1. Fundamental Rights Impact Assessment (FRIA)

    • Assess proportionality
    • Identify affected persons
    • Evaluate alternatives
  2. EU Database Registration

    • Register system in centralized database
    • Public transparency
  3. Prior Judicial Authorization

    • Court or independent authority approval
    • Case-by-case basis
    • Urgent cases: ex-post authorization
  4. Targeted Use Only

    • Limited to specific individuals
    • Time/geographic limits
    • Minimize false positives
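
As a rough illustration only, the pre-deployment gate can be modeled as a conjunction of the four safeguards. Field names here are hypothetical, and a real assessment is a legal process, not a boolean checklist.

```python
from dataclasses import dataclass

@dataclass
class RBIDeployment:
    fria_completed: bool          # Fundamental Rights Impact Assessment done
    registered_in_eu_db: bool     # system registered in the EU database
    judicial_authorization: bool  # prior court/independent-authority approval
    urgent: bool                  # urgent cases may proceed with ex-post authorization
    targeted: bool                # specific individuals, time/geographic limits

def rbi_safeguards_met(d: RBIDeployment) -> bool:
    """All four safeguards must hold; urgency only relaxes the timing of authorization."""
    authorized = d.judicial_authorization or d.urgent
    return d.fria_completed and d.registered_in_eu_db and authorized and d.targeted
```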

What’s NOT Prohibited

  • Post-event facial recognition (non-real-time) for investigating past crimes
  • Border control biometrics (not “publicly accessible” under the Regulation’s definition)
  • Airport security checks (controlled access, not public space)
  • Private premises (owner’s property, not “publicly accessible”)

Penalties [Art 99]

| Violation | Fine |
| --- | --- |
| Article 5 breach (prohibited practices) | Up to €35,000,000 OR 7% of total worldwide annual turnover (whichever is higher) |
| Intentional violations | Higher end of penalty range |
| Repeated violations | Aggravating factor |

No de minimis exception: Applies to organizations of all sizes.
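
Because the fine is the higher of the two figures, €35M acts as a floor rather than a cap for large companies. A one-line sketch of the Art 99 ceiling:

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Art 99: up to EUR 35M or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1bn turnover faces a ceiling of EUR 70M, not EUR 35M
print(f"{max_article5_fine(1_000_000_000):,.0f}")  # 70,000,000
```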

Enforcement Timeline

| Date | Milestone |
| --- | --- |
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Article 5 prohibitions become enforceable |
| February 4-6, 2025 | European Commission publishes guidelines on prohibited practices |
| August 2, 2026 | Full AI Act enforcement (high-risk systems, conformity assessment, etc.) |

Compliance Checklist

Organizations should:

  • Audit all AI systems for Article 5 prohibited practices
  • Cease use of any prohibited systems immediately
  • Review emotion recognition in workplace/education (unless medical/safety)
  • Verify biometric systems don’t categorize by protected characteristics
  • Check facial recognition doesn’t scrape internet/CCTV indiscriminately
  • Ensure no social scoring across unrelated contexts
  • Verify predictive policing uses objective facts, not profiling alone
  • Eliminate manipulation/deception techniques in AI interactions
  • Document safeguards for vulnerable group interactions
  • If law enforcement: ensure real-time RBI has judicial authorization + FRIA

Interaction with Other Laws

| Law | Overlap with Article 5 |
| --- | --- |
| GDPR | Prohibited biometric processing reinforces GDPR Art 9 (special category data) |
| DSA | Manipulative dark patterns also regulated under the Digital Services Act |
| Charter of Fundamental Rights | Article 5 implements Charter protections (dignity, non-discrimination, privacy) |
| National criminal law | Law enforcement exceptions subject to national procedural safeguards |

Gray Areas and Guidance

Is my system prohibited? Decision tree (sketched in code below):

  1. Is it one of the 8 categories?

    • No → Not prohibited under Art 5 (may be high-risk under Chapter III)
    • Yes → Continue to step 2
  2. Does an exception apply?

    • Medical/safety (emotion recognition)
    • Law enforcement (biometric categorization, real-time RBI with safeguards)
    • Human-supported assessment with objective facts (predictive policing)
  3. If exception applies, are safeguards in place?

    • Judicial authorization (RBI)
    • Fundamental rights assessment
    • Database registration
    • Proportionality review
  4. If no exception OR safeguards missing:

    • PROHIBITED — cease use immediately
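
A compact sketch of the four steps as a function; the boolean inputs stand in for the legal analysis at each step, so this is a simplification for illustration, not a compliance tool.

```python
def article5_status(in_prohibited_category: bool,
                    exception_applies: bool,
                    safeguards_in_place: bool) -> str:
    """Walk the four-step Art 5 decision tree above."""
    if not in_prohibited_category:                 # step 1
        return "Not prohibited under Art 5; check the high-risk rules instead"
    if exception_applies and safeguards_in_place:  # steps 2-3
        return "Permitted under an exception; keep the safeguards documented"
    return "PROHIBITED: cease use immediately"     # step 4

print(article5_status(True, True, False))  # an exception without safeguards is still banned
```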

Common Misconceptions

| Myth | Reality |
| --- | --- |
| “Small companies exempt from Article 5” | NO — applies to all operators placing AI on the EU market |
| “Can make prohibited AI compliant with safeguards” | NO — no risk mitigation possible for prohibited practices |
| “Only applies to high-risk AI systems” | NO — prohibitions apply regardless of risk classification |
| “Post-event facial recognition also banned” | NO — only real-time RBI in public spaces prohibited (with exceptions) |
| “Can use emotion AI if employees consent” | NO — workplace emotion recognition prohibited even with consent (unless safety) |

Citation

Article 5 — Prohibited AI Practices, Regulation (EU) 2024/1689

Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
