
EU AI Act: Risk Classification & Prohibited Practices

Risk Classification

Tier 1: Unacceptable Risk (Prohibited) [Art 5]

These AI uses are banned:

| Prohibited Practice | Example | Citation |
| --- | --- | --- |
| Social scoring | Government scoring citizens for social behaviour | Art 5(1)(c) |
| Subliminal manipulation | AI using subliminal or manipulative techniques that materially distort behaviour | Art 5(1)(a) |
| Real-time biometric identification | Live facial recognition in public (with exceptions) | Art 5(1)(h) |
| Emotion recognition at work/school | AI detecting emotions of employees/students | Art 5(1)(f) |
| Predictive policing (individuals) | Predicting individual crime risk from profiling | Art 5(1)(d) |
| Untargeted facial scraping | Building face databases from internet/CCTV | Art 5(1)(e) |

Effective: February 2025

Tier 2: High Risk [Art 6, Annex III]

AI systems used in these domains must comply with strict requirements:

| Domain | Examples | Citation |
| --- | --- | --- |
| Biometrics | Facial recognition, emotion detection | Annex III(1) |
| Critical infrastructure | Energy, water, transport systems | Annex III(2) |
| Education | Admissions, grading, proctoring | Annex III(3) |
| Employment | CV screening, performance monitoring | Annex III(4) |
| Essential services | Credit scoring, insurance pricing | Annex III(5) |
| Law enforcement | Risk assessment, evidence analysis | Annex III(6) |
| Migration/asylum | Border control, visa processing | Annex III(7) |
| Justice | Legal research influencing outcomes | Annex III(8) |

Tier 3: Limited Risk [Art 50]

Transparency obligations only:

  • Chatbots must disclose they are AI
  • AI-generated content must be labelled
  • Emotion recognition systems must inform users
  • Deepfakes must be disclosed

Tier 4: Minimal Risk

No specific requirements — most AI systems fall here (spam filters, games, etc.)
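The four-tier structure above can be sketched as a simple lookup, checking the strictest tier first. This is an illustrative toy only: the keyword sets are hypothetical labels drawn from the tables above, not the Act's actual legal tests, which turn on context, purpose, and detailed conditions rather than category names.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art 5)"
    HIGH = "high risk (Art 6, Annex III)"
    LIMITED = "limited risk (Art 50)"
    MINIMAL = "minimal risk"

# Hypothetical labels for the practices and domains listed above.
PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "realtime_biometric_id", "emotion_recognition_workplace",
              "predictive_policing_individual", "untargeted_face_scraping"}
HIGH_RISK_DOMAINS = {"biometrics", "critical_infrastructure", "education",
                     "employment", "essential_services", "law_enforcement",
                     "migration_asylum", "justice"}
TRANSPARENCY_ONLY = {"chatbot", "ai_generated_content",
                     "emotion_recognition_consumer", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a labelled use case to its tier, strictest tier first."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The tier order matters: a use case matching a prohibition is banned outright regardless of whether its domain would otherwise be merely high risk.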

Source Text (Article 5 - Prohibited Practices)

  1. The following AI practices shall be prohibited:

(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of, materially distorting the behaviour of a person…

(c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment… (ii) detrimental or unfavourable treatment… that is unjustified or disproportionate…

(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary [for specific listed purposes]…

Citation

Article 5, EU AI Act (Regulation 2024/1689)

Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
