AI Act: General Provisions
General Provisions [Articles 1-4]
Rule: The AI Act establishes harmonized rules for placing AI systems on the EU market and putting them into service, covering subject matter, scope, key definitions, and AI literacy requirements for providers and deployers.
Subject Matter [Article 1]
Article 1(1): Harmonized Rules
This Regulation lays down:
| Area | Coverage |
|---|---|
| Harmonized rules | For placing on market, putting into service, use of AI systems |
| Prohibited practices | AI practices with unacceptable risk |
| High-risk requirements | Specific requirements for high-risk AI systems |
| Transparency obligations | For certain AI systems and GPAI models |
| Market surveillance | Rules for monitoring and enforcement |
| Governance structure | AI Office, national authorities, advisory bodies |
Article 1(2): Objectives
The Regulation aims to:
- Protect fundamental rights - Ensure AI respects EU values, rights, freedoms
- Single market - Enable free movement of AI systems across EU
- Legal certainty - Clear rules for providers and deployers
- Innovation - Foster trustworthy AI development
- Governance - Effective enforcement and cooperation
Balancing act:
- Safety and rights protection
- Innovation and competitiveness
- Risk-based, proportionate regulation
Scope [Article 2]
Article 2(1): Territorial Scope
AI Act applies to:
| Situation | Applicability |
|---|---|
| Providers (EU or non-EU) | Placing on the market or putting into service AI systems or GPAI models in the EU, wherever established |
| Deployers in EU | Established or located in the EU and using AI systems under their authority |
| Providers/deployers outside EU | Where the output produced by the AI system is used in the EU |
| Importers/distributors | Making AI systems available on the EU market |
| Product manufacturers | Placing on the market or putting into service an AI system together with their product, under their own name or trademark |
| Authorized representatives | Acting on behalf of non-EU providers |
“In the EU” includes the output hook:
- Output of the AI system is used in EU territory
- Regardless of where the provider or deployer is established
- This gives the Act extra-territorial reach over non-EU entities
Examples:
| Scenario | Covered? |
|---|---|
| US company provides facial recognition used by EU police | ✅ Yes (output used in EU) |
| UK bank uses AI credit scoring for EU customers | ✅ Yes (output used in EU) |
| Chinese manufacturer exports AI-powered medical device to EU | ✅ Yes (imported to EU market) |
| EU startup trains AI model but only deploys in Asia | ❌ No (no EU use) |
EU Institutions, Bodies, Offices and Agencies
AI Act applies to EU institutions, bodies, offices and agencies when acting as:
- Providers
- Deployers
- Authorized representatives
No blanket exemption for public-sector AI use: EU bodies are covered in the same roles as private operators.
Article 2(1)(c): Third-Country Providers and Deployers
The AI Act applies to providers and deployers established in third countries where the output produced by the AI system is used in the EU.
Effect: non-EU companies must comply wherever their AI output is used in the EU.
Exclusions [Article 2(3), (6), (10)]
AI Act does NOT apply to:
| Exclusion | Reason |
|---|---|
| Military, defense, national security AI | Covered by other frameworks |
| AI developed solely for scientific research and development | R&D carve-out; ends once placed on the market or put into service for other purposes |
| Personal non-professional use | Private use by individuals |
Important limitations:
Military exclusion applies ONLY when:
- Exclusively for military purposes
- National security purposes
- Defense activities
Does NOT exclude:
- Dual-use AI (civilian + military)
- AI for law enforcement
- AI for border control
- AI for public administration
National Security and Member State Competences [Article 2(3)]
AI Act does not affect Member State powers concerning:
- National security
- Defense
- Military activities
But: Member States must respect fundamental rights.
Relationship with Other EU Law
AI Act applies without prejudice to:
| Law | Relationship |
|---|---|
| GDPR | Data protection rules apply concurrently |
| Product Safety | Medical devices, machinery, toys - sector rules apply |
| Aviation | Aviation safety regulations take precedence in specific areas |
| Financial Services | Prudential requirements remain |
| Digital Services Act | Platform obligations continue |
Coordination principle:
- AI Act provides horizontal framework
- Sector-specific rules apply where they exist
- Both frameworks apply - must comply with all
Definitions [Article 3]
Core Definitions
AI System [Article 3(1)]
A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Key elements:
- Machine-based (not purely human)
- Operates with autonomy
- May adapt after deployment
- Infers how to generate outputs
- Outputs can influence environments
Examples of AI systems:
| System | AI System? | Reason |
|---|---|---|
| Machine learning model for fraud detection | ✅ Yes | Infers patterns, makes predictions |
| Rule-based expert system | ⚠️ Maybe | Logic- and knowledge-based systems that infer from encoded knowledge qualify; fixed rules defined solely by humans do not |
| Traditional software with fixed logic | ❌ No | No inference, no autonomy |
| Statistical model (regression) | ⚠️ Maybe | Depends on autonomy and inference capability |
| Generative AI (ChatGPT, DALL-E) | ✅ Yes | Generates content, exhibits autonomy |
Provider [Article 3(3)]
Natural or legal person that develops an AI system or GPAI model or has it developed, and places it on the market or puts it into service under its own name or trademark.
Provider responsibilities:
- Compliance with AI Act requirements
- CE marking (for high-risk systems)
- Declaration of conformity
- Post-market monitoring
- Documentation and transparency
Who is a provider?
| Scenario | Provider? |
|---|---|
| Company develops AI internally for own use | ✅ Yes (if puts into service) |
| Company commissions third-party to develop AI | ✅ Yes (company is provider) |
| Open-source community develops AI model | ⚠️ Complex (whoever places it on the EU market under their own name or trademark may become the provider) |
| Company fine-tunes existing AI model | ⚠️ Maybe (if substantial modification) |
Deployer [Article 3(4)]
Natural or legal person that uses an AI system under its authority except where AI system used in course of personal non-professional activity.
Deployer responsibilities:
- Follow instructions for use
- Human oversight
- Input data quality
- Monitor for incidents
- Cooperate with authorities
Examples:
| Entity | Deployer? |
|---|---|
| Hospital using AI diagnostic tool | ✅ Yes |
| Employer using AI resume screening | ✅ Yes |
| Individual using ChatGPT for work | ✅ Yes (professional activity) |
| Individual using ChatGPT at home | ❌ No (personal use) |
| Police using facial recognition | ✅ Yes |
Placing on the Market [Article 3(9)]
First making available an AI system or GPAI model on EU market.
Key points:
- First time available in EU
- Triggers provider obligations
- One-time event per system
Putting into Service [Article 3(11)]
Supply of an AI system for first use directly to deployer or for own use in EU.
Difference from placing on market:
- Putting into service: first use
- Placing on market: first availability
Examples:
| Action | Placing on Market? | Putting into Service? |
|---|---|---|
| Selling AI software to customers | ✅ Yes | ❌ No (customers put into service) |
| Deploying AI internally in company | ❌ No (not made available to others) | ✅ Yes |
| Distributing open-source AI | ⚠️ Complex | ⚠️ Complex |
High-Risk AI System [Article 6]
An AI system listed in Annex III (specific use cases), or an AI system that is a safety component of a product (or is itself a product) covered by Union harmonization legislation in Annex I and required to undergo third-party conformity assessment.
Two paths to high-risk:
1. Annex III listing (use case based):
   - Biometric identification
   - Critical infrastructure management
   - Education and vocational training
   - Employment, worker management
   - Access to essential services
   - Law enforcement
   - Migration, asylum, border control
   - Administration of justice and democratic processes
2. Safety component (product based):
   - Medical devices
   - Machinery
   - Civil aviation
   - Motor vehicles
   - Marine equipment
Consequences of high-risk classification:
- Full requirements of Title III apply
- Conformity assessment required
- CE marking mandatory
- EU database registration
- Post-market monitoring
General-Purpose AI Model (GPAI) [Article 3(63)]
AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.
Characteristics:
- Not designed for single specific task
- Can perform multiple distinct tasks
- Adaptable to various applications
- Examples: GPT-4, Claude, LLaMA
Does NOT include:
- AI systems for single specific task
- Models designed for one narrow application
- Narrow, task-specific traditional machine learning models
Systemic Risk [Article 3(65)]
A risk specific to the high-impact capabilities of GPAI models, having a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole, that can be propagated at scale across the value chain.
Triggers additional obligations for GPAI models with systemic risk.
Other Key Definitions
| Term | Definition [Article] |
|---|---|
| Importer | Person located or established in EU placing on the market an AI system bearing the name or trademark of a non-EU provider [3(6)] |
| Distributor | Person in the supply chain, other than provider/importer, making an AI system available on the EU market [3(7)] |
| Operator | Provider, product manufacturer, deployer, authorized representative, importer or distributor [3(8)] |
| Authorized representative | Person located or established in EU with written mandate from non-EU provider [3(5)] |
| Conformity assessment | Process demonstrating high-risk AI system requirements are fulfilled [3(20)] |
| CE marking | Marking indicating AI system conforms to applicable requirements [3(24)] |
| Post-market monitoring system | Activities by providers to collect and review experience with AI systems they place on market or put into service [3(25)] |
| Market surveillance authority | National authority carrying out market surveillance of AI systems [3(26)] |
| Recall | Measure aiming to achieve return to provider, or taking out of service or disabling, of an AI system made available to deployers [3(16)] |
| Withdrawal | Measure aiming to prevent an AI system in the supply chain from being made available on the market [3(17)] |
AI Literacy [Article 4]
Obligation for Providers and Deployers
Providers and deployers shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.
“AI literacy” means:
- Understanding AI capabilities and limitations
- Awareness of AI risks
- Knowledge of how to use AI responsibly
- Ability to interpret AI outputs
- Understanding of human oversight requirements
Proportionate Measures
Measures shall take into account:
| Factor | Consideration |
|---|---|
| Technical knowledge | Existing expertise of staff |
| Experience | Level of familiarity with AI |
| Education | Educational background |
| Training | Previous AI training received |
| Context | Nature of AI system being used |
| Persons affected | Who interacts with AI system |
Practical measures:
| Staff Level | Appropriate AI Literacy Measures |
|---|---|
| Executives | Strategic understanding of AI risks and benefits, decision-making oversight |
| Technical staff | Deep technical training on AI system operation, monitoring, troubleshooting |
| Operational staff | Practical training on using AI tools, interpreting outputs, escalation procedures |
| Oversight staff | Understanding AI limitations, bias detection, human override procedures |
| Customer-facing staff | How to explain AI decisions to affected persons, handling complaints |
Commission Guidance
The Commission may issue guidance on the practical implementation of the AI literacy obligation.
Expected to cover:
- Training curricula
- Competency frameworks
- Sector-specific guidance
- Assessment methods
Practical Compliance
Determining Applicability
Checklist for whether the AI Act applies (a rough triage sketch in code follows the checklist):
1. ✅ Is it an AI system under Article 3(1)?
   - Machine-based with some level of autonomy?
   - Infers how to generate outputs?
   - Outputs can influence physical or virtual environments?
2. ✅ Is it within scope under Article 2?
   - Placed on the EU market, used in the EU, or output used in the EU?
   - Not exclusively for military/defence/national security purposes?
   - Not a purely personal, non-professional use?
3. ✅ What is your role under Article 3?
   - Provider (develop and place on market or put into service)?
   - Deployer (use under your authority)?
   - Importer/distributor/authorized representative?
4. ✅ What risk category applies?
   - Prohibited practice (Article 5)?
   - High-risk (Annex III or safety component, Article 6)?
   - Transparency obligation (Article 50)?
   - General-purpose AI model (Chapter V)?
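As a rough illustration of how this checklist could be wired into an internal compliance intake tool, here is a minimal Python sketch. The class and function names (AISystemProfile, ai_act_triage) and the simplified boolean tests are assumptions for illustration only; real applicability analysis turns on the full legal tests, not these flags.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical intake record for one AI system under review."""
    is_machine_based: bool      # Article 3(1): machine-based system
    infers_outputs: bool        # Article 3(1): infers how to generate outputs
    eu_nexus: bool              # Article 2: placed on EU market, used in EU, or output used in EU
    exclusively_military: bool  # exclusion: exclusively military/defence/national security
    purely_personal_use: bool   # exclusion: purely personal, non-professional activity
    role: str                   # "provider", "deployer", "importer", "distributor", ...

def ai_act_triage(p: AISystemProfile) -> str:
    """First-pass triage mirroring the checklist above (not legal advice)."""
    # Step 1: is it an AI system at all (Article 3(1))?
    if not (p.is_machine_based and p.infers_outputs):
        return "Likely not an AI system under Article 3(1); AI Act likely not applicable"
    # Step 2: is it within scope (Article 2)?
    if not p.eu_nexus:
        return "No EU nexus identified; AI Act likely not applicable"
    if p.exclusively_military:
        return "Exclusively military/defence/national security purpose; excluded from scope"
    if p.purely_personal_use:
        return "Purely personal, non-professional use; deployer obligations do not apply"
    # Steps 3-4: role and risk category need a separate, deeper review.
    return (f"In scope as {p.role}; proceed to risk classification "
            "(Articles 5-6, Article 50, Chapter V)")

# Example: non-EU vendor whose credit-scoring output is used in the EU
print(ai_act_triage(AISystemProfile(True, True, True, False, False, "provider")))
```

A sketch like this only flags which questions to ask; borderline answers (e.g. partial military use, open-source distribution) still need legal review.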
Implementing AI Literacy (Article 4)
Steps for compliance (a simple training-record sketch follows these steps):
1. ✅ Assess current AI literacy levels
   - Survey staff knowledge
   - Identify gaps
   - Prioritize based on roles
2. ✅ Develop training program
   - General AI awareness for all staff
   - Role-specific technical training
   - Ongoing refresher training
3. ✅ Document AI literacy measures
   - Training materials and curricula
   - Attendance records
   - Competency assessments
   - Update frequency
4. ✅ Tailor by role and system
   - Executives: strategic oversight
   - Technical: deep system knowledge
   - Operational: practical use
   - Customer-facing: explanation skills
5. ✅ Review and update regularly
   - As AI systems evolve
   - As roles change
   - As regulations develop
   - Annual minimum
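To make the documentation step concrete, here is a minimal sketch of a training-record structure and gap check an organisation might keep as evidence of its AI literacy measures. The role-to-topic mapping, topic names and annual refresh interval are illustrative assumptions drawn from the staff-level table above, not requirements taken from the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative role-to-topic mapping; topics and refresh interval are assumed.
REQUIRED_TOPICS = {
    "executive": {"ai_risk_overview", "oversight_responsibilities"},
    "technical": {"system_operation", "monitoring", "incident_handling"},
    "operational": {"tool_usage", "output_interpretation", "escalation"},
    "customer_facing": {"explaining_ai_decisions", "complaint_handling"},
}
REFRESH_DAYS = 365  # assumed annual refresher cycle

@dataclass
class TrainingRecord:
    """One staff member's documented AI literacy training."""
    staff_id: str
    role: str
    completed_topics: set = field(default_factory=set)
    last_trained: Optional[date] = None

def literacy_gaps(record: TrainingRecord, today: date) -> list:
    """Return outstanding AI literacy items for one staff member."""
    gaps = sorted(REQUIRED_TOPICS.get(record.role, set()) - record.completed_topics)
    if record.last_trained is None or (today - record.last_trained).days > REFRESH_DAYS:
        gaps.append("refresher_training_due")
    return gaps

# Example: an operational user who has only completed basic tool training
rec = TrainingRecord("emp-042", "operational", {"tool_usage"}, date(2024, 1, 15))
print(literacy_gaps(rec, date(2025, 6, 1)))
```

Keeping records in a structured form like this makes it straightforward to show proportionate, role-tailored measures and to schedule refresher training.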
Common Mistakes
Scope interpretation:
- Assuming AI Act only applies to EU companies (applies to anyone whose AI is used in EU)
- Thinking personal use exemption is broad (only non-professional personal use exempt)
- Believing military exclusion is broad (only exclusive military/defense purposes)
Definition issues:
- Treating all software as AI systems (must meet Article 3(1) definition)
- Confusing provider and deployer roles (provider develops/places; deployer uses)
- Not recognizing when substantial modification makes you a provider
AI literacy:
- Generic training for all staff (must be tailored to roles and systems)
- One-time training (must be ongoing and updated)
- No documentation (must document measures taken)
- Ignoring proportionality (measures must fit context)
Related
- AI Act prohibited practices (Art 5)
- AI Act risk classification (Arts 6-7)
- AI Act high-risk requirements (Arts 8-15)
- AI Act enforcement (Arts 94-101)
- Back to AI Act overview