EU AI Act: General Provisions

General Provisions [Articles 1-4]

Rule: The AI Act establishes harmonized rules for placing AI systems on the EU market and putting them into service, covering subject matter, scope, key definitions, and AI literacy requirements for providers and deployers.

Subject Matter [Article 1]

Article 1(1): Harmonized Rules

This Regulation lays down:

| Area | Coverage |
| --- | --- |
| Harmonized rules | Placing on the market, putting into service and use of AI systems |
| Prohibited practices | AI practices posing unacceptable risk |
| High-risk requirements | Specific requirements for high-risk AI systems |
| Transparency obligations | For certain AI systems and GPAI models |
| Market surveillance | Rules for monitoring and enforcement |
| Governance structure | AI Office, national authorities, advisory bodies |

Article 1(2): Objectives

The Regulation aims to:

  1. Protect fundamental rights - Ensure AI respects EU values, rights, freedoms
  2. Single market - Enable free movement of AI systems across EU
  3. Legal certainty - Clear rules for providers and deployers
  4. Innovation - Foster trustworthy AI development
  5. Governance - Effective enforcement and cooperation

Balancing act:

  • Safety and rights protection
  • Innovation and competitiveness
  • Risk-based, proportionate regulation

Scope [Article 2]

Article 2(1): Territorial Scope

The AI Act applies to:

| Situation | Applicability |
| --- | --- |
| Providers in the EU | Placing on the market or putting into service AI systems in the EU |
| Providers outside the EU | AI systems whose output is used in the EU |
| Deployers established or located in the EU | Using AI systems under their authority |
| Importers/distributors | Making AI systems available on the EU market |
| Product manufacturers | Placing on the market products incorporating AI systems under their own name |
| Authorized representatives | Acting on behalf of non-EU providers |

“In the EU” means:

  • The output of the AI system is used in EU territory
  • This applies regardless of where the provider or deployer is established
  • The Act therefore has extraterritorial reach over non-EU entities

Examples:

| Scenario | Covered? |
| --- | --- |
| US company provides facial recognition used by EU police | ✅ Yes (output used in the EU) |
| UK bank uses AI credit scoring for EU customers | ✅ Yes (output used in the EU) |
| Chinese manufacturer exports an AI-powered medical device to the EU | ✅ Yes (imported onto the EU market) |
| EU startup trains an AI model but only deploys it in Asia | ❌ No (no EU use) |
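The territorial triggers in the table can be condensed into a toy helper. This is an illustrative sketch only, not legal advice: the function names and boolean parameters are invented simplifications of the Article 2(1) tests.

```python
def provider_in_scope(places_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Simplified Article 2(1)(a)/(c) triggers for providers.

    A provider is caught by placing on the EU market / putting into
    service in the EU, or by its system's output being used in the EU.
    """
    return places_on_eu_market or output_used_in_eu


def deployer_in_scope(established_or_located_in_eu: bool,
                      output_used_in_eu: bool) -> bool:
    """Simplified Article 2(1)(b)/(c) triggers for deployers."""
    return established_or_located_in_eu or output_used_in_eu


# Scenarios from the table above:
# US provider whose facial-recognition output is used by EU police -> covered
assert provider_in_scope(places_on_eu_market=False, output_used_in_eu=True)
# EU startup that trains a model but deploys only in Asia -> not covered
assert not provider_in_scope(places_on_eu_market=False, output_used_in_eu=False)
```

The point of the sketch is that each role has its own trigger: establishment alone does not catch a provider, and establishment outside the EU does not save a deployer whose output reaches the EU.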

Scope for EU Institutions

AI Act applies to EU institutions, bodies, offices and agencies when acting as:

  • Providers
  • Deployers
  • Authorized representatives

There is no general exemption for public-sector AI use.

Article 2(1)(c): Scope for Third Countries

The AI Act applies to providers and deployers established in third countries where the output produced by the AI system is used in the EU.

Effect: Non-EU companies must comply if their AI's output is used in the EU.

Articles 2(3), 2(6) and 2(10): Exclusions

The AI Act does NOT apply to:

| Exclusion | Basis |
| --- | --- |
| AI used exclusively for military, defense or national security purposes | Article 2(3); covered by other frameworks |
| AI developed and used solely for scientific research and development | Article 2(6); the exemption ends once the system is placed on the market or put into service |
| Personal, non-professional use | Article 2(10); private use by individuals |

Important limitations:

The military exclusion applies ONLY where the AI system is placed on the market, put into service or used exclusively for:

  • Military purposes
  • National security purposes
  • Defense activities

Does NOT exclude:

  • Dual-use AI (civilian + military)
  • AI for law enforcement
  • AI for border control
  • AI for public administration

Article 2(3): Member State Competences

AI Act does not affect Member State powers concerning:

  • National security
  • Defense
  • Military activities

But: Member States must respect fundamental rights.

Relationship with Other EU Law

AI Act applies without prejudice to:

| Law | Relationship |
| --- | --- |
| GDPR | Data protection rules apply concurrently |
| Product safety law | Sector rules for medical devices, machinery, toys continue to apply |
| Aviation | Aviation safety regulations take precedence in specific areas |
| Financial services | Prudential requirements remain |
| Digital Services Act | Platform obligations continue |

Coordination principle:

  • AI Act provides horizontal framework
  • Sector-specific rules apply where they exist
  • Both frameworks apply - must comply with all

Definitions [Article 3]

Core Definitions

AI System [Article 3(1)]

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Key elements:

  • Machine-based (not purely human)
  • Operates with autonomy
  • May adapt after deployment
  • Infers how to generate outputs
  • Outputs can influence environments

Examples of AI systems:

| System | AI System? | Reason |
| --- | --- | --- |
| Machine learning model for fraud detection | ✅ Yes | Infers patterns, makes predictions |
| Rule-based expert system | ⚠️ Maybe | Logic- and knowledge-based approaches can qualify, but systems that merely execute rules defined solely by humans fall outside (Recital 12) |
| Traditional software with fixed logic | ❌ No | No inference, no autonomy |
| Statistical model (regression) | ⚠️ Maybe | Depends on autonomy and inference capability |
| Generative AI (ChatGPT, DALL-E) | ✅ Yes | Generates content, exhibits autonomy |

Provider [Article 3(3)]

A natural or legal person, public authority, agency or other body that develops an AI system or GPAI model (or has one developed) and places it on the market, or puts the AI system into service, under its own name or trademark, whether for payment or free of charge.

Provider responsibilities:

  • Compliance with AI Act requirements
  • CE marking (for high-risk systems)
  • Declaration of conformity
  • Post-market monitoring
  • Documentation and transparency

Who is a provider?

| Scenario | Provider? |
| --- | --- |
| Company develops AI internally for own use | ✅ Yes (if it puts the system into service) |
| Company commissions a third party to develop AI | ✅ Yes (the commissioning company is the provider) |
| Open-source community develops an AI model | ⚠️ Complex (first commercial user may be the provider) |
| Company fine-tunes an existing AI model | ⚠️ Maybe (if substantial modification) |

Deployer [Article 3(4)]

A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

Deployer responsibilities:

  • Follow instructions for use
  • Human oversight
  • Input data quality
  • Monitor for incidents
  • Cooperate with authorities

Examples:

| Entity | Deployer? |
| --- | --- |
| Hospital using an AI diagnostic tool | ✅ Yes |
| Employer using AI resume screening | ✅ Yes |
| Individual using ChatGPT for work | ✅ Yes (professional activity) |
| Individual using ChatGPT at home | ❌ No (personal use) |
| Police using facial recognition | ✅ Yes |

Placing on the Market [Article 3(9)]

The first making available of an AI system or a GPAI model on the EU market.

Key points:

  • First time available in EU
  • Triggers provider obligations
  • One-time event per system

Putting into Service [Article 3(11)]

The supply of an AI system for first use, directly to the deployer or for own use, in the EU for its intended purpose.

Difference from placing on market:

  • Putting into service: first use
  • Placing on market: first availability

Examples:

| Action | Placing on Market? | Putting into Service? |
| --- | --- | --- |
| Selling AI software to customers | ✅ Yes | ❌ No (customers put it into service) |
| Deploying AI internally in a company | ❌ No (not made available to others) | ✅ Yes |
| Distributing open-source AI | ⚠️ Complex | ⚠️ Complex |
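The distinction can be sketched as a toy function (illustrative only; the boolean inputs are invented simplifications of the legal tests):

```python
def lifecycle_events(made_available_in_eu: bool, first_used_in_eu: bool) -> list[str]:
    """Toy encoding of the Article 3(9)/3(11) distinction.

    'Placing on the market' turns on first *availability* on the EU market;
    'putting into service' turns on first *use* in the EU.
    """
    events = []
    if made_available_in_eu:
        events.append("placing on the market")   # Art. 3(9)
    if first_used_in_eu:
        events.append("putting into service")    # Art. 3(11)
    return events


# Selling AI software to customers (provider's view): availability only.
assert lifecycle_events(True, False) == ["placing on the market"]
# Deploying in-house AI: first use without making it available to others.
assert lifecycle_events(False, True) == ["putting into service"]
```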

High-Risk AI System [Article 6]

An AI system listed in Annex III (specific use cases), or an AI system that is a safety component of a product (or is itself a product) covered by the Union harmonization legislation listed in Annex I and required to undergo third-party conformity assessment.

Two paths to high-risk:

  1. Annex III listing (use case based):

    • Biometric identification
    • Critical infrastructure management
    • Education and vocational training
    • Employment, worker management
    • Access to essential services
    • Law enforcement
    • Migration, asylum, border control
    • Administration of justice
  2. Safety component (product based):

    • Medical devices
    • Machinery
    • Civil aviation
    • Motor vehicles
    • Marine equipment

Consequences of high-risk classification:

  • Full requirements of Title III apply
  • Conformity assessment required
  • CE marking mandatory
  • EU database registration
  • Post-market monitoring

General-Purpose AI Model (GPAI) [Article 3(63)]

AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

Characteristics:

  • Not designed for single specific task
  • Can perform multiple distinct tasks
  • Adaptable to various applications
  • Examples: GPT-4, Claude, LLaMA

Does NOT include:

  • AI systems for single specific task
  • Models designed for one narrow application
  • Traditional machine learning models

Systemic Risk [Article 3(65)]

Risk specific to high-impact capabilities of GPAI models that have significant impact on EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, society, environment.

Triggers additional obligations for GPAI models with systemic risk.

Other Key Definitions

| Term | Definition | Article |
| --- | --- | --- |
| Importer | Person established in the EU placing on the market an AI system bearing the name or trademark of a non-EU provider | 3(6) |
| Distributor | Person in the supply chain, other than the provider or importer, making an AI system available on the EU market | 3(7) |
| Operator | Provider, product manufacturer, deployer, authorized representative, importer or distributor | 3(8) |
| Authorized representative | Person established in the EU with a written mandate from a non-EU provider | 3(5) |
| Conformity assessment | Process of demonstrating that the requirements for high-risk AI systems have been fulfilled | 3(20) |
| CE marking | Marking indicating that an AI system conforms to the applicable requirements | 3(24) |
| Post-market monitoring | Activities carried out by providers to collect and review experience gained from the AI systems they place on the market | 3(25) |
| Market surveillance authority | National authority checking that AI systems comply with the AI Act | 3(26) |
| Recall | Measure aimed at achieving the return to the provider, or the taking out of service, of an AI system made available to deployers | 3(16) |
| Withdrawal | Measure aimed at preventing an AI system in the supply chain from being made available on the market | 3(17) |

AI Literacy [Article 4]

Article 4: Obligation for Providers and Deployers

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.

“AI literacy” means:

  • Understanding AI capabilities and limitations
  • Awareness of AI risks
  • Knowledge of how to use AI responsibly
  • Ability to interpret AI outputs
  • Understanding of human oversight requirements

Proportionate Measures

Measures shall take into account:

| Factor | Consideration |
| --- | --- |
| Technical knowledge | Existing expertise of staff |
| Experience | Level of familiarity with AI |
| Education | Educational background |
| Training | Previous AI training received |
| Context | Nature of the AI system being used |
| Persons affected | Who interacts with the AI system |

Practical measures:

| Staff Level | Appropriate AI Literacy Measures |
| --- | --- |
| Executives | Strategic understanding of AI risks and benefits, decision-making oversight |
| Technical staff | Deep technical training on AI system operation, monitoring, troubleshooting |
| Operational staff | Practical training on using AI tools, interpreting outputs, escalation procedures |
| Oversight staff | Understanding AI limitations, bias detection, human override procedures |
| Customer-facing staff | How to explain AI decisions to affected persons, handling complaints |

Commission Guidance

The Commission and its AI Office are expected to support practical implementation of the AI literacy obligation through guidance and collected good practices.

Expected to cover:

  • Training curricula
  • Competency frameworks
  • Sector-specific guidance
  • Assessment methods

Practical Compliance

Determining Applicability

Checklist for whether AI Act applies:

  1. ✅ Is it an AI system under Article 3(1)?

    • Machine-based with autonomy?
    • Infers how to generate outputs?
    • Influences environments?
  2. ✅ Is it within scope under Article 2?

    • Used in EU or output used in EU?
    • Not exclusively military/defense/national security?
    • Not for personal non-professional use?
  3. ✅ What is your role under Article 3?

    • Provider (develop and place on market)?
    • Deployer (use under your authority)?
    • Importer/distributor/authorized rep?
  4. ✅ What risk category under Articles 5-6?

    • Prohibited practice (Article 5)?
    • High-risk (Annex III or safety component)?
    • Transparency obligation (Article 50)?
    • General-purpose AI model?

Implementing AI Literacy (Article 4)

Steps for compliance:

  1. ✅ Assess current AI literacy levels

    • Survey staff knowledge
    • Identify gaps
    • Prioritize based on roles
  2. ✅ Develop training program

    • General AI awareness for all staff
    • Role-specific technical training
    • Ongoing refresher training
  3. ✅ Document AI literacy measures

    • Training materials and curricula
    • Attendance records
    • Competency assessments
    • Update frequency
  4. ✅ Tailor by role and system

    • Executives: strategic oversight
    • Technical: deep system knowledge
    • Operational: practical use
    • Customer-facing: explanation skills
  5. ✅ Review and update regularly

    • As AI systems evolve
    • As roles change
    • As regulations develop
    • Annual minimum
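Steps 1, 2 and 4 above can be sketched as a simple gap analysis. The role names and measure strings are illustrative, drawn loosely from the tables in this section; they are not prescribed by the Act.

```python
# Role -> role-specific measures, mirroring the 'Practical measures' table.
ROLE_MEASURES = {
    "executive": ["strategic AI risk briefing", "decision-making oversight"],
    "technical": ["system operation", "monitoring", "troubleshooting"],
    "operational": ["tool usage", "output interpretation", "escalation procedures"],
    "customer_facing": ["explaining AI decisions", "complaint handling"],
}


def training_plan(role: str, completed: tuple[str, ...] = ()) -> list[str]:
    """Gap analysis: general awareness for everyone, then the role-specific
    items the person has not yet completed."""
    baseline = ["general AI awareness"] + ROLE_MEASURES.get(role, [])
    return [m for m in baseline if m not in completed]


assert training_plan("technical", completed=("monitoring",)) == [
    "general AI awareness", "system operation", "troubleshooting"]
```

Keeping the plan as data rather than prose makes the documentation requirement (step 3) easier: the mapping itself, plus completion records, evidences the measures taken.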

Common Mistakes

Scope interpretation:

  • Assuming AI Act only applies to EU companies (applies to anyone whose AI is used in EU)
  • Thinking personal use exemption is broad (only non-professional personal use exempt)
  • Believing military exclusion is broad (only exclusive military/defense purposes)

Definition issues:

  • Treating all software as AI systems (must meet Article 3(1) definition)
  • Confusing provider and deployer roles (provider develops/places; deployer uses)
  • Not recognizing when substantial modification makes you a provider

AI literacy:

  • Generic training for all staff (must be tailored to roles and systems)
  • One-time training (must be ongoing and updated)
  • No documentation (must document measures taken)
  • Ignoring proportionality (measures must fit context)

Sources

Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
