EU AI Act: Scope, Definitions and AI Literacy (Articles 1-4)
Rule: The AI Act applies to providers, deployers, importers and distributors of AI systems placed on the EU market or whose output is used in the EU, giving it extraterritorial reach. Key definitions establish who is regulated and what qualifies as an AI system, and organizations must ensure AI literacy among their staff.
Article 1: Subject Matter
The Regulation establishes:
- Harmonized rules for placing AI systems on the EU market, putting them into service, and their use
- Prohibitions of certain AI practices (Article 5)
- Requirements for high-risk AI systems and operator obligations
- Transparency rules for certain AI systems
- Governance rules on market monitoring, surveillance, and enforcement
Objective: Ensure AI systems placed on EU market are safe and respect fundamental rights.
Article 2: Scope
2.1 — Territorial Application
The AI Act applies to:
| Entity Location | AI System Location | Output Used In | Applies? |
|---|---|---|---|
| In EU | In EU | EU | ✅ Yes |
| Outside EU | In EU | EU | ✅ Yes |
| Outside EU | Outside EU | Output used in EU | ✅ Yes (extraterritorial) |
| In EU | Any | Outside EU only | ⚠️ Partly — EU-established deployers remain in scope (Art 2(1)(b)); providers only if placing on the EU market or putting into service in the EU |
Key principle: If AI output is used in the EU, the Act applies regardless of where provider/deployer/system is located.
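The decision table above can be sketched as a small predicate. This is a hedged illustration, not legal advice; the function name and boolean inputs are hypothetical simplifications of the Article 2(1) tests:

```python
def ai_act_applies(placed_or_used_in_eu: bool, output_used_in_eu: bool) -> bool:
    """Simplified territorial-scope test: the Act applies if the AI system
    is placed on the EU market or put into service/used in the EU, or if
    its output is used in the EU, regardless of where the provider or
    deployer is established."""
    return placed_or_used_in_eu or output_used_in_eu

# A US provider whose chatbot output is used by EU customers is in scope:
print(ai_act_applies(placed_or_used_in_eu=False, output_used_in_eu=True))  # True
```

The second argument encodes the extraterritorial reach: location of the entity never appears, because it does not change the outcome once output reaches the EU.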
Examples of Extraterritorial Application
| Scenario | AI Act Applies? |
|---|---|
| US company deploys chatbot used by EU customers | ✅ Yes |
| Chinese facial recognition system used in EU airports | ✅ Yes |
| EU company uses US cloud AI for internal operations in EU | ✅ Yes (EU company has deployer obligations; the US vendor has provider obligations) |
| UK provider sells AI system to EU customer | ✅ Yes (provider obligations) |
| Swiss AI used only in Switzerland | ❌ No |
2.2 — Material Scope (What’s Covered)
The AI Act regulates:
- AI systems (see Article 3 definition)
- General-purpose AI models (GPAI)
- Providers, deployers, importers, distributors
- AI-generated content
2.3 — Exclusions and Exceptions
Complete Exclusions (AI Act does not apply):
| Excluded Area | Basis | Details |
|---|---|---|
| Military, defense, national security | Art 2(3) | AI used exclusively for these purposes |
| Research & development | Art 2(6), 2(8) | Systems/models developed solely for scientific R&D, or pre-market research, testing and development |
| Personal non-professional use | Art 2(10) | Natural persons using AI in a purely personal, non-professional activity (e.g., consumer apps) |
| Free and open-source AI | Art 2(12) | Unless placed on market or put into service as a high-risk system, a prohibited practice (Art 5) or a transparency-covered system (Art 50); GPAI models with systemic risk also remain covered |
Partial Exclusions:
| Area | What’s Excluded | What Still Applies |
|---|---|---|
| Law enforcement | Specific safeguards in national law | Prohibited practices (Art 5), fundamental rights |
| Migration/asylum | Specific safeguards apply | Transparency, human oversight for certain systems |
| National security | Member State competence | Only if exclusively national security purpose |
Military/Defense Exception — Key Limitations
“Exclusively” means:
- AI system ONLY used for military/defense/national security
- No dual-use (civilian + military)
- No later repurposing for civilian use
Examples:
| System | Excluded? |
|---|---|
| Military drone targeting system (military only) | ✅ Yes |
| Dual-use AI (military + humanitarian missions) | ❌ No (civilian use triggers AI Act) |
| Cybersecurity AI for critical infrastructure | ❌ No (civilian protection, not national security) |
| Intelligence agency facial recognition (national security only) | ✅ Yes (if exclusively national security) |
Gray area: “National security” undefined in EU law — Member States interpret differently.
Article 3: Definitions
3.1 — AI System
Definition:
“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Key characteristics:
- Machine-based (software or embedded in hardware)
- Autonomy (varies from minimal to full)
- Adaptiveness (may learn/change after deployment, but not required)
- Inference (generates outputs from inputs)
- Influence (affects physical or virtual environments)
What qualifies as AI system:
| Technology | AI System? |
|---|---|
| Machine learning models (supervised, unsupervised, reinforcement) | ✅ Yes |
| Neural networks, deep learning | ✅ Yes |
| Large language models (ChatGPT, Claude, etc.) | ✅ Yes |
| Computer vision, facial recognition | ✅ Yes |
| Recommendation algorithms | ✅ Yes |
| Expert systems, rule-based AI | ✅ Yes (if meets definition) |
| Simple rule-based automation (if-then, no inference) | ❌ Likely no |
| Statistical analysis tools (descriptive only) | ❌ Likely no |
| Traditional software (deterministic, no inference) | ❌ No |
3.2 — Provider
Definition:
“A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.”
Who is a provider:
- Develops AI system themselves
- Has AI system developed by third party but places on market under own name
- Open-source developers (if system becomes high-risk or GPAI with systemic risk)
Provider examples:
| Entity | Provider? |
|---|---|
| OpenAI (develops and markets GPT-4) | ✅ Yes |
| Company that white-labels third-party AI | ✅ Yes (if places on market under own name) |
| Internal team building AI for own company’s use | ⚠️ Often yes — putting an AI system into service for own use under the company’s name can make it a provider (as well as deployer) |
| Contract developer building AI for client | ❌ No (client is provider if they place on market) |
Key obligations: Conformity assessment, technical documentation, CE marking (for high-risk).
3.3 — Deployer
Definition:
“Any natural or legal person, public authority, agency or other body using an AI system under its own authority, except where the AI system is used in the course of a personal non-professional activity.”
Who is a deployer:
- Uses AI system under their authority
- Includes internal use (no market placement)
- Employees using AI under company authority
Deployer examples:
| Entity | Deployer? |
|---|---|
| Hospital using AI diagnostic tool | ✅ Yes |
| Company using ChatGPT Enterprise for customer service | ✅ Yes |
| Bank using credit scoring AI | ✅ Yes |
| Individual using ChatGPT for personal tasks | ❌ No (personal non-professional use) |
| Employee using company-provided AI tools | ❌ No (employer is deployer) |
Key obligations: Human oversight, monitoring, record-keeping (for high-risk systems).
3.4 — General-Purpose AI Model (GPAI)
Definition:
“An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market.”
Characteristics:
- Trained on large datasets
- Self-supervision (unsupervised or semi-supervised learning)
- Generality: Can perform diverse tasks
- Not task-specific
GPAI examples:
| Model | GPAI? |
|---|---|
| GPT-4, Claude, Gemini | ✅ Yes |
| DALL-E, Stable Diffusion (text-to-image) | ✅ Yes |
| Mistral, Llama (open-source LLMs) | ✅ Yes |
| Specialized medical diagnosis AI (single task) | ❌ No |
| Task-specific recommendation engine | ❌ Likely no |
Additional category: GPAI with systemic risk (>10^25 FLOPs training compute) has stricter obligations.
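The compute threshold lends itself to a one-line check. This is an illustrative sketch; the constant and function names are assumptions, and the Commission can also designate models below the threshold as posing systemic risk:

```python
# Presumption of systemic risk for GPAI models (Article 51(2)):
# cumulative training compute greater than 10^25 floating point operations.
SYSTEMIC_RISK_FLOPS_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a GPAI model's training compute exceeds the threshold
    that triggers the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: above 10^25 FLOPs
print(presumed_systemic_risk(1e24))  # False: an order of magnitude below
```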
3.5 — Other Key Definitions
| Term | Definition |
|---|---|
| Placing on the market | First making AI system available on EU market |
| Putting into service | Supply of AI system for first use directly to deployer or for own use in EU |
| Intended purpose | Use for which AI system is intended by provider, including context and conditions specified in documentation |
| Reasonably foreseeable misuse | Use of AI system in way not intended by provider but which may result from reasonably foreseeable human behavior or system interaction |
| High-risk AI system | AI system classified under Article 6: a safety component of a product covered by Annex I legislation, or a use case listed in Annex III |
| Serious incident | Incident or malfunctioning that directly or indirectly leads to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights obligations, or serious harm to property or the environment |
| Post-market monitoring | All activities carried out by providers to collect and review experience from use of AI systems |
Article 4: AI Literacy
Requirement:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
4.1 — Who Must Ensure AI Literacy
| Entity | Obligation |
|---|---|
| Providers | Train staff developing, testing, deploying AI systems |
| Deployers | Train staff operating, monitoring, using AI systems |
| Both | Extend to contractors, consultants, third parties acting on their behalf |
4.2 — What is AI Literacy?
Definition:
“Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”
Core components:
| Area | What Staff Should Understand |
|---|---|
| Technical fundamentals | How AI works, limitations, failure modes |
| AI Act obligations | Legal requirements, prohibited practices, high-risk rules |
| Risks and harms | Bias, discrimination, manipulation, safety risks |
| Fundamental rights | Privacy, non-discrimination, dignity, transparency |
| Operational procedures | How to monitor, log, report incidents, human oversight |
| Context-specific | Risks specific to deployment context (healthcare, law enforcement, etc.) |
4.3 — Level of Literacy Required
“Sufficient level” depends on:
- Role: Developers need deeper technical knowledge than end-users
- Risk level: High-risk systems require higher AI literacy
- Context: Healthcare AI requires medical context knowledge
- Affected persons: Understanding impact on vulnerable groups
4.4 — Implementation Guidance
Training should cover:
| Audience | Training Topics |
|---|---|
| Developers | Technical design, bias mitigation, testing, documentation, AI Act technical requirements |
| Product managers | Intended purpose, risk assessment, conformity assessment, labeling, instructions for use |
| Operators/users | How to use AI safely, when to override, logging, incident detection |
| Compliance officers | Full AI Act requirements, enforcement, penalties, documentation |
| Senior management | Strategic risks, governance, accountability, cultural change |
Practical measures:
- Onboarding training for new hires
- Annual refresher courses
- Role-specific training modules
- Certification or competency assessments
- Documented training records
- Regular updates as AI Act guidance evolves
4.5 — Timeline
Effective: February 2, 2025
Organizations should have AI literacy programs in place for all staff dealing with AI systems.
4.6 — Penalties
No specific penalty for AI literacy non-compliance, but failure to ensure AI literacy may:
- Contribute to other violations (e.g., improper deployment of high-risk systems)
- Demonstrate lack of due diligence in governance
- Factor into penalty calculations under Article 99
Interaction Between Roles
| Scenario | Provider Role | Deployer Role |
|---|---|---|
| Company develops AI and sells to customers | ✅ Provider | ❌ Not deployer (unless also uses internally) |
| Company buys third-party AI for internal use | ❌ Not provider | ✅ Deployer |
| Company modifies third-party AI substantially and deploys | ✅ Provider (substantial modification) | ✅ Deployer |
| Cloud AI service (SaaS) | ✅ Provider | Customers are deployers |
| Open-source AI used by company | Original developer may be provider | ✅ Deployer |
Dual role: Many organizations are both providers (for some AI) and deployers (for others).
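The role matrix above condenses to a small classification sketch (hedged illustration; the boolean inputs are simplifications of the legal tests in Article 3(3) and 3(4)):

```python
def classify_roles(markets_under_own_name: bool,
                   uses_under_own_authority: bool) -> set:
    """Return the AI Act roles an organization holds.
    - Provider: develops (or substantially modifies) an AI system and
      places it on the market or puts it into service under its own name.
    - Deployer: uses an AI system under its own authority, professionally.
    An organization can hold both roles at once."""
    roles = set()
    if markets_under_own_name:
        roles.add("provider")
    if uses_under_own_authority:
        roles.add("deployer")
    return roles

# Company that substantially modifies third-party AI and deploys it internally:
print(classify_roles(markets_under_own_name=True, uses_under_own_authority=True))
```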
Compliance Checklist (Articles 1-4)
Organizations should:
- Determine scope applicability:
  - Does your AI output get used in the EU?
  - Is it purely military/defense/national security use?
  - Is it R&D only (pre-market)?
- Identify your role:
  - Provider (develop/place on market)?
  - Deployer (use AI under your authority)?
  - Both?
- Classify your AI systems:
  - Meets AI system definition?
  - General-purpose AI model?
  - High-risk (Annex III)?
  - Prohibited practice (Article 5)?
- Implement AI literacy program:
  - Training for developers, users, management
  - Role-specific curricula
  - Regular updates
  - Document training completion
Citation
Articles 1-4 — Subject Matter, Scope, Definitions, AI Literacy, Regulation (EU) 2024/1689