EU AI Act: Scope, Definitions and AI Literacy

Scope, Definitions and AI Literacy [Art 1-4]

Rule: The AI Act applies to providers, deployers, importers, and distributors of AI systems, with extraterritorial reach beyond the EU. Key definitions establish who is regulated and what qualifies as an AI system, and organizations must ensure AI literacy among their staff.

Article 1: Subject Matter

The Regulation establishes:

  1. Harmonized rules for placing AI systems on the EU market, putting them into service, and their use
  2. Prohibitions of certain AI practices (Article 5)
  3. Requirements for high-risk AI systems and operator obligations
  4. Transparency rules for certain AI systems
  5. Governance rules on market monitoring, surveillance, and enforcement

Objective: Ensure AI systems placed on EU market are safe and respect fundamental rights.

Article 2: Scope

2.1 — Territorial Application

The AI Act applies to:

| Entity Location | AI System Location | Output Used In | Applies? |
|---|---|---|---|
| In EU | In EU | EU | ✅ Yes |
| Outside EU | In EU | EU | ✅ Yes |
| Outside EU | Outside EU | Output used in EU | ✅ Yes (extraterritorial) |
| In EU | Any | Outside EU only | ❌ No (unless output returns to EU) |

Key principle: If AI output is used in the EU, the Act applies regardless of where provider/deployer/system is located.
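The decision rule in the table above can be sketched as a small function. This is an illustrative simplification; the function name and inputs are invented for this example, not taken from the Act, and real scope analysis requires legal judgment:

```python
# Sketch of the territorial-scope logic from the table above.
# Names and parameters are illustrative, not from the Act itself.

def ai_act_applies(entity_in_eu: bool, system_in_eu: bool,
                   output_used_in_eu: bool) -> bool:
    """Rough decision rule: the Act applies when the AI system is in
    the EU or its output is used in the EU. Note that entity location
    (entity_in_eu) does not change the outcome in any table row."""
    return system_in_eu or output_used_in_eu

# A US provider whose chatbot output reaches EU customers is in scope:
assert ai_act_applies(entity_in_eu=False, system_in_eu=False,
                      output_used_in_eu=True)
# Swiss AI used only in Switzerland is out of scope:
assert not ai_act_applies(entity_in_eu=False, system_in_eu=False,
                          output_used_in_eu=False)
```

The unused `entity_in_eu` parameter is kept deliberately: it mirrors the table's first column and makes explicit that scope turns on where the system or its output is used, not where the entity sits.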

Examples of Extraterritorial Application

| Scenario | AI Act Applies? |
|---|---|
| US company deploys chatbot used by EU customers | ✅ Yes |
| Chinese facial recognition system used in EU airports | ✅ Yes |
| EU company uses US cloud AI for internal operations in EU | ✅ Yes (deployer obligations for the EU company; provider obligations for the US vendor) |
| UK provider sells AI system to EU customer | ✅ Yes (provider obligations) |
| Swiss AI used only in Switzerland | ❌ No |

2.2 — Material Scope (What’s Covered)

The AI Act regulates:

  • AI systems (see Article 3 definition)
  • General-purpose AI models (GPAI)
  • Providers, deployers, importers, distributors
  • AI-generated content

2.3 — Exclusions and Exceptions

Complete Exclusions (AI Act does not apply):

| Excluded Area | Basis | Details |
|---|---|---|
| Military, defense, national security | Art 2(3) | AI used exclusively for these purposes |
| Research & development | Art 2(6), 2(8) | AI systems developed solely for scientific R&D, or R&D activity before market placement |
| Personal non-professional use | Art 2(10) | Individuals using AI for purely personal purposes (e.g., consumer apps) |
| Free and open-source AI | Art 2(12) | Unless placed on the market as a prohibited, high-risk, or transparency-covered system, or as a GPAI model with systemic risk |

Partial Exclusions:

| Area | What's Excluded | What Still Applies |
|---|---|---|
| Law enforcement | Specific safeguards in national law | Prohibited practices (Art 5), fundamental rights |
| Migration/asylum | Specific safeguards apply | Transparency, human oversight for certain systems |
| National security | Member State competence | Excluded only if used exclusively for national security purposes |

Military/Defense Exception — Key Limitations

“Exclusively” means:

  • AI system ONLY used for military/defense/national security
  • No dual-use (civilian + military)
  • No later repurposing for civilian use

Examples:

| System | Excluded? |
|---|---|
| Military drone targeting system (military only) | ✅ Yes |
| Dual-use AI (military + humanitarian missions) | ❌ No (civilian use triggers AI Act) |
| Cybersecurity AI for critical infrastructure | ❌ No (civilian protection, not national security) |
| Intelligence agency facial recognition (national security only) | ✅ Yes (if exclusively national security) |

Gray area: “national security” is not defined in EU law, so Member States interpret it differently.

Article 3: Definitions

3.1 — AI System

Definition:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Key characteristics:

  1. Machine-based (software or embedded in hardware)
  2. Autonomy (varies from minimal to full)
  3. Adaptiveness (may learn/change after deployment, but not required)
  4. Inference (generates outputs from inputs)
  5. Influence (affects physical or virtual environments)
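The five characteristics above can be expressed as a rough checklist. This is a hypothetical helper for illustration only, not an official classification tool; in practice each criterion calls for case-by-case legal assessment:

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the five characteristics above.
# Field and function names are invented for this sketch.

@dataclass
class SystemProfile:
    machine_based: bool          # software, or embedded in hardware
    has_autonomy: bool           # operates with some level of autonomy
    infers_outputs: bool         # infers outputs from inputs
    influences_environment: bool # affects physical or virtual environments
    adaptive: bool = False       # "may exhibit adaptiveness": optional

def meets_ai_system_definition(p: SystemProfile) -> bool:
    # Adaptiveness is NOT required by the definition, so it is
    # deliberately left out of the conjunction below.
    return (p.machine_based and p.has_autonomy
            and p.infers_outputs and p.influences_environment)

# A deterministic if-then rule engine with no inference likely fails:
rule_engine = SystemProfile(machine_based=True, has_autonomy=False,
                            infers_outputs=False,
                            influences_environment=True)
assert not meets_ai_system_definition(rule_engine)
```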

What qualifies as AI system:

| Technology | AI System? |
|---|---|
| Machine learning models (supervised, unsupervised, reinforcement) | ✅ Yes |
| Neural networks, deep learning | ✅ Yes |
| Large language models (ChatGPT, Claude, etc.) | ✅ Yes |
| Computer vision, facial recognition | ✅ Yes |
| Recommendation algorithms | ✅ Yes |
| Expert systems, rule-based AI | ✅ Yes (if they meet the definition) |
| Simple rule-based automation (if-then, no inference) | ❌ Likely no |
| Statistical analysis tools (descriptive only) | ❌ Likely no |
| Traditional software (deterministic, no inference) | ❌ No |

3.2 — Provider

Definition:

“A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.”

Who is a provider:

  • Develops AI system themselves
  • Has AI system developed by third party but places on market under own name
  • Open-source developers (if system becomes high-risk or GPAI with systemic risk)

Provider examples:

| Entity | Provider? |
|---|---|
| OpenAI (develops and markets GPT-4) | ✅ Yes |
| Company that white-labels third-party AI | ✅ Yes (if placed on market under own name) |
| Internal team building AI for own company’s use | ✅ Likely yes (developing and putting into service under its own name, even for own use; the company is then both provider and deployer) |
| Contract developer building AI for a client | ❌ No (the client is the provider if they place it on the market) |

Key obligations: Conformity assessment, technical documentation, CE marking (for high-risk).

3.3 — Deployer

Definition:

“Any natural or legal person, public authority, agency or other body using an AI system under its own authority, except where the AI system is used in the course of a personal non-professional activity.”

Who is a deployer:

  • Uses AI system under their authority
  • Includes internal use (no market placement)
  • Employees using AI under company authority

Deployer examples:

| Entity | Deployer? |
|---|---|
| Hospital using AI diagnostic tool | ✅ Yes |
| Company using ChatGPT Enterprise for customer service | ✅ Yes |
| Bank using credit scoring AI | ✅ Yes |
| Individual using ChatGPT for personal tasks | ❌ No (personal non-professional use) |
| Employee using company-provided AI tools | ❌ No (employer is the deployer) |

Key obligations: Human oversight, monitoring, record-keeping (for high-risk systems).

3.4 — General-Purpose AI Model (GPAI)

Definition:

“An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market.”

Characteristics:

  • Trained on large datasets
  • Self-supervision (unsupervised or semi-supervised learning)
  • Generality: Can perform diverse tasks
  • Not task-specific

GPAI examples:

| Model | GPAI? |
|---|---|
| GPT-4, Claude, Gemini | ✅ Yes |
| DALL-E, Stable Diffusion (text-to-image) | ✅ Yes |
| Mistral, Llama (open-source LLMs) | ✅ Yes |
| Specialized medical diagnosis AI (single task) | ❌ No |
| Task-specific recommendation engine | ❌ Likely no |

Additional category: GPAI models with systemic risk (training compute above 10^25 FLOPs, Article 51) are subject to stricter obligations.
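The systemic-risk presumption is a simple numeric threshold and can be checked directly. Illustrative arithmetic only; the constant comes from Article 51, and the presumption is rebuttable in practice:

```python
# Article 51 presumes systemic risk when cumulative training compute
# exceeds 10^25 floating point operations. Names are illustrative.

SYSTEMIC_RISK_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Strictly greater than the threshold triggers the presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# A model trained with roughly 5 x 10^25 FLOPs is presumed systemic risk:
assert presumed_systemic_risk(5e25)
# One trained with 10^24 FLOPs is not:
assert not presumed_systemic_risk(1e24)
```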

3.5 — Other Key Definitions

| Term | Definition |
|---|---|
| Placing on the market | First making an AI system available on the EU market |
| Putting into service | Supply of an AI system for first use directly to the deployer, or for own use, in the EU |
| Intended purpose | The use for which the AI system is intended by the provider, including the context and conditions of use specified in its documentation |
| Reasonably foreseeable misuse | Use of an AI system in a way not intended by the provider, but which may result from reasonably foreseeable human behavior or interaction with other systems |
| High-risk AI system | AI systems classified as high-risk under Article 6 (Annex I safety components or Annex III use cases) |
| Serious incident | An incident or malfunction that directly or indirectly leads to death or serious harm to health, serious disruption of critical infrastructure, serious harm to property or the environment, or infringement of fundamental rights obligations |
| Post-market monitoring | All activities carried out by providers to collect and review experience from the use of AI systems they place on the market or put into service |

Article 4: AI Literacy

Requirement:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.”

4.1 — Who Must Ensure AI Literacy

| Entity | Obligation |
|---|---|
| Providers | Train staff developing, testing, and deploying AI systems |
| Deployers | Train staff operating, monitoring, and using AI systems |
| Both | Extend to contractors, consultants, and third parties acting on their behalf |

4.2 — What is AI Literacy?

Definition:

“Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”

Core components:

| Area | What Staff Should Understand |
|---|---|
| Technical fundamentals | How AI works, its limitations, failure modes |
| AI Act obligations | Legal requirements, prohibited practices, high-risk rules |
| Risks and harms | Bias, discrimination, manipulation, safety risks |
| Fundamental rights | Privacy, non-discrimination, dignity, transparency |
| Operational procedures | How to monitor, log, report incidents, exercise human oversight |
| Context-specific | Risks specific to the deployment context (healthcare, law enforcement, etc.) |

4.3 — Level of Literacy Required

“Sufficient level” depends on:

  1. Role: Developers need deeper technical knowledge than end-users
  2. Risk level: High-risk systems require higher AI literacy
  3. Context: Healthcare AI requires medical context knowledge
  4. Affected persons: Understanding impact on vulnerable groups

4.4 — Implementation Guidance

Training should cover:

| Audience | Training Topics |
|---|---|
| Developers | Technical design, bias mitigation, testing, documentation, AI Act technical requirements |
| Product managers | Intended purpose, risk assessment, conformity assessment, labeling, instructions for use |
| Operators/users | How to use AI safely, when to override, logging, incident detection |
| Compliance officers | Full AI Act requirements, enforcement, penalties, documentation |
| Senior management | Strategic risks, governance, accountability, cultural change |

Practical measures:

  • Onboarding training for new hires
  • Annual refresher courses
  • Role-specific training modules
  • Certification or competency assessments
  • Documented training records
  • Regular updates as AI Act guidance evolves
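The "documented training records" measure above could be kept in a simple register. The structure and field names below are assumptions for illustration, not AI Act requirements; real programs would map fields to their own HR and compliance systems:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical register for documenting AI literacy training.
# All names and fields are illustrative assumptions.

@dataclass
class TrainingRecord:
    staff_member: str
    role: str                 # e.g. "developer", "operator"
    module: str               # role-specific training module
    completed_on: date
    assessment_passed: bool   # competency assessment outcome

@dataclass
class LiteracyRegister:
    records: list = field(default_factory=list)

    def log(self, record: TrainingRecord) -> None:
        self.records.append(record)

    def completed_for_role(self, role: str) -> list:
        """Records for a role where the competency assessment passed."""
        return [r for r in self.records
                if r.role == role and r.assessment_passed]

reg = LiteracyRegister()
reg.log(TrainingRecord("A. Example", "operator",
                       "High-risk oversight basics",
                       date(2025, 2, 2), True))
assert len(reg.completed_for_role("operator")) == 1
```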

4.5 — Timeline

Effective: February 2, 2025

Organizations should have AI literacy programs in place for all staff dealing with AI systems.

4.6 — Penalties

No specific penalty for AI literacy non-compliance, but failure to ensure AI literacy may:

  • Contribute to other violations (e.g., improper deployment of high-risk systems)
  • Demonstrate lack of due diligence in governance
  • Factor into penalty calculations under Article 99

Interaction Between Roles

| Scenario | Provider Role | Deployer Role |
|---|---|---|
| Company develops AI and sells to customers | ✅ Provider | ❌ Not deployer (unless also uses it internally) |
| Company buys third-party AI for internal use | ❌ Not provider | ✅ Deployer |
| Company substantially modifies third-party AI and deploys it | ✅ Provider (substantial modification) | ✅ Deployer |
| Cloud AI service (SaaS) | ✅ Provider | Customers are deployers |
| Open-source AI used by company | Original developer may be provider | ✅ Deployer |

Dual role: Many organizations are both providers (for some AI) and deployers (for others).

Compliance Checklist (Articles 1-4)

Organizations should:

  • Determine scope applicability:

    • Does your AI output get used in EU?
    • Is it purely military/defense/national security use?
    • Is it R&D only (pre-market)?
  • Identify your role:

    • Provider (develop/place on market)?
    • Deployer (use AI under your authority)?
    • Both?
  • Classify your AI systems:

    • Meets AI system definition?
    • General-purpose AI model?
    • High-risk (Annex III)?
    • Prohibited practice (Article 5)?
  • Implement AI literacy program:

    • Training for developers, users, management
    • Role-specific curricula
    • Regular updates
    • Document training completion

Citation

Regulation (EU) 2024/1689 (AI Act), Articles 1-4: Subject Matter, Scope, Definitions, AI Literacy


Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
