EU AI Act: Post-Market Monitoring, Incident Reporting, and Market Surveillance

Post-Market Monitoring, Incident Reporting, and Market Surveillance [Art 72-93]

Rule: Providers of high-risk AI systems must implement continuous post-market monitoring, report serious incidents immediately, and cooperate with market surveillance authorities. Authorities have extensive powers to investigate, test, and enforce compliance.

Effective: August 2, 2026 (high-risk system requirements)

Overview

Chapter IX of the AI Act establishes a comprehensive post-market surveillance framework ensuring high-risk AI systems remain compliant throughout their operational lifetime.

Three pillars:

  1. Post-market monitoring: Providers continuously collect and analyze performance data
  2. Serious incident reporting: Mandatory immediate reporting of incidents causing harm
  3. Market surveillance: Authorities monitor compliance and take enforcement action

Chapter IX Structure

| Section | Articles | Coverage |
|---|---|---|
| Section 1 | 72-73 | Post-market monitoring and serious incident reporting |
| Section 2 | 74-80 | Market surveillance, procedures for non-compliant systems |
| Section 3 | 81-84 | Union safeguard procedure, AI systems presenting risk |
| Section 4 | 85-87 | Remedies (complaints, explanations, whistleblower protection) |
| Section 5 | 88-93 | Supervision and enforcement of general-purpose AI models |

Section 1: Post-Market Monitoring and Incident Reporting [Art 72-73]

Article 72: Post-Market Monitoring System

72.1 — Core Requirement

Providers of high-risk AI systems must establish, document, implement, and maintain a post-market monitoring system.

Proportionality: System must be proportionate to:

  • Nature of AI technologies
  • Risks posed by system
  • Intended purpose

72.2 — Data Collection and Analysis

Post-market monitoring must actively and systematically:

| Activity | Description |
|---|---|
| Collect data | Gather information on system performance in real-world use |
| Document | Record incidents, failures, user feedback |
| Analyze | Examine data for patterns, risks, compliance issues |

Purpose: Verify that AI system continues to comply with Chapter III, Section 2 requirements throughout operational lifetime.

72.3 — Scope of Monitoring

Must monitor:

  • System performance: Accuracy, robustness, outputs
  • User interactions: How humans oversee the system and override decisions
  • Incidents and malfunctions: Errors, anomalies, unexpected behavior
  • Interactions with other AI systems: If system operates alongside or integrates with other AI
  • Impact on fundamental rights: Evidence of discrimination, bias, harm

72.4 — Post-Market Monitoring Plan

Requirement: Providers must prepare written post-market monitoring plan as part of technical documentation (Annex IV).

Plan must include:

| Component | Details |
|---|---|
| Data collection strategy | What data will be collected, from where, how often |
| Analysis methodology | How data will be analyzed, metrics used |
| Incident handling | Procedures for detecting and responding to issues |
| Update procedures | How findings inform system updates, retraining |
| Reporting | Internal reporting to management, external to authorities |
| Resources | Personnel, tools, budget for monitoring activities |
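The plan components above can be sketched as a structured record. This is an illustrative sketch only — the field names are hypothetical, and the authoritative template will come from the Commission's implementing act (see 72.5):

```python
from dataclasses import dataclass

@dataclass
class MonitoringPlan:
    """Illustrative container for the Article 72 plan components.

    Field names are hypothetical; the official template is to be set
    by a Commission implementing act.
    """
    data_collection: dict       # what data, from where, how often
    analysis_methodology: dict  # analysis methods and metrics used
    incident_handling: list     # detection and response procedures
    update_procedures: list     # how findings feed updates / retraining
    reporting: dict             # internal and external reporting lines
    resources: dict             # personnel, tools, budget

    def missing_components(self) -> list:
        """Names of components left empty, for a quick completeness check."""
        return [name for name, value in vars(self).items() if not value]

plan = MonitoringPlan(
    data_collection={"sources": ["system logs", "deployer feedback"], "frequency": "daily"},
    analysis_methodology={"metrics": ["accuracy", "false positive rate"]},
    incident_handling=["triage procedure", "escalation path"],
    update_procedures=[],
    reporting={"internal": "quarterly review with management"},
    resources={"personnel": 2, "tools": ["log pipeline"]},
)
print(plan.missing_components())  # → ['update_procedures']
```

A completeness check like this can feed an internal review before the plan is filed in the technical documentation.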

72.5 — Template and Guidelines

Commission obligation: Provide template for post-market monitoring plan via implementing acts.

Deadline: By February 2, 2026 (6 months before high-risk system requirements apply).

Purpose: Harmonize monitoring approaches across EU, simplify compliance.

72.6 — Integration with Existing Systems

Providers may integrate AI Act post-market monitoring into existing systems required by:

  • Other Union harmonization legislation (Annex I, Section A)
  • Financial services regulation (for financial institutions)

Condition: Integration must ensure equivalent level of protection.

72.7 — Law Enforcement Exemption

Sensitive operational data from law enforcement deployers is exempt from collection requirements.

Rationale: Protect ongoing investigations, operational security.

Article 73: Reporting of Serious Incidents

73.1 — Definition of Serious Incident

Serious incident: Any incident that directly or indirectly leads to:

  • Death of a person
  • Serious damage to health of a person
  • Serious and irreversible disruption of management/operation of critical infrastructure

Causal link: Incident must be caused by or linked to AI system use.

73.2 — Reporting Obligation

Who reports: Providers of high-risk AI systems

To whom:

  • Market surveillance authorities in Member States where incident occurred
  • Notified body that issued certificate (if applicable)

When: Immediately after provider establishes causal link (or reasonable likelihood) between AI system and incident.

73.3 — Reporting Timelines

| Incident Type | Reporting Deadline |
|---|---|
| Death of a person | Within 10 days of becoming aware |
| Widespread infringement or critical infrastructure disruption | Within 2 days of becoming aware |
| Other serious incidents | Within 15 days after establishing causal link |
| All cases | Immediately upon establishing causal relationship |

Clock starts: When provider becomes aware of incident OR establishes causal link (whichever earlier).
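The outer reporting windows can be expressed as a small lookup. This is a minimal sketch — the category keys are invented labels for the table rows above, and the Act's primary rule (report immediately once the causal link is established) still governs:

```python
from datetime import date, timedelta

# Outer limits from Article 73, in days after the clock starts.
# Keys are illustrative labels, not terms from the Regulation.
REPORTING_WINDOWS = {
    "death": 10,
    "widespread_or_serious": 2,
    "other_serious": 15,
}

def reporting_deadline(incident_type: str, clock_start: date) -> date:
    """Latest date to notify the market surveillance authority.

    These windows are outer limits, not targets: the Act requires
    reporting immediately once the causal link is established.
    """
    return clock_start + timedelta(days=REPORTING_WINDOWS[incident_type])

print(reporting_deadline("death", date(2026, 9, 1)))  # → 2026-09-11
```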

73.4 — Incident Report Contents

Report must include:

| Information | Details |
|---|---|
| Provider identification | Name, address, contact details, authorized representative |
| System identification | System name, type, version, CE marking, registration number |
| Incident description | What happened, when, where, who affected |
| Causal analysis | Why provider believes AI system caused or contributed |
| Harm assessment | Nature and extent of harm (death, health damage, infrastructure disruption) |
| Affected persons | Number and categories of persons impacted |
| Geographic scope | Where incident occurred, where system is deployed |
| Investigation status | What investigation provider has conducted |
| Corrective actions | What provider is doing to prevent recurrence |
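A submission checklist over these required elements can be automated. The field labels below are illustrative stand-ins for the Article 73 report contents, not an official schema:

```python
# Required report elements (labels are illustrative, not an official schema)
REQUIRED_FIELDS = [
    "provider_identification", "system_identification", "incident_description",
    "causal_analysis", "harm_assessment", "affected_persons",
    "geographic_scope", "investigation_status", "corrective_actions",
]

def validate_report(report: dict) -> list:
    """Return the required fields that are missing or empty in a draft report."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

draft = {
    "provider_identification": "ACME AI GmbH, Berlin (hypothetical)",
    "system_identification": "TriageAssist v2.1, CE-marked",
    "incident_description": "Misclassification during live triage on 2026-09-01",
    "harm_assessment": "Serious damage to health of one person",
}
print(validate_report(draft))  # lists the five elements still missing
```

Running such a check before submission helps avoid a follow-up finding of supplying incomplete information, which is itself finable.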

73.5 — Provider Obligations Following Report

After notifying authorities, provider must:

Investigate:

  • Conduct investigation of incident and AI system
  • Determine root cause
  • Assess whether similar incidents could occur elsewhere

Assess risk:

  • Perform risk assessment of incident
  • Identify whether systemic issues exist
  • Evaluate need for wider corrective action

Take corrective actions:

  • Implement measures to prevent recurrence
  • Update system if necessary
  • Retrain model if appropriate
  • Update instructions for use

Cooperate:

  • Work with market surveillance authorities
  • Provide information to notified bodies
  • Coordinate with deployers affected

Preserve evidence:

  • Do NOT alter AI system in ways that could impair investigation
  • Preserve logs and documentation
  • Maintain technical documentation

73.6 — Authority Response

Market surveillance authority obligations:

Within 7 days:

  • Take appropriate measures in response to notification
  • Follow procedures under Regulation (EU) 2019/1020
  • Assess whether system presents risk requiring withdrawal

Immediately:

  • Inform European Commission of serious incident
  • Inform other Member States if incident has cross-border implications

Ongoing:

  • Monitor provider’s corrective actions
  • Evaluate effectiveness of measures
  • Coordinate with other authorities if needed

73.7 — Integration with Sectoral Legislation

Medical devices: For AI systems that are medical devices or components, reporting follows:

  • Medical Device Regulation (EU) 2017/745
  • In Vitro Diagnostic Regulation (EU) 2017/746

Other sectors: AI systems covered by specific Union legislation follow those reporting requirements, with AI Act provisions applying complementarily.

73.8 — Public Reporting

Transparency: Commission and Member States must make serious incident information publicly available (with confidential information redacted).

Purpose: Inform other providers, deployers, affected persons about systemic risks.

Section 2: Market Surveillance and Control [Art 74-80]

Article 74: Market Surveillance Framework

74.1 — Application of Market Surveillance Regulation

Regulation (EU) 2019/1020 applies to AI systems covered by AI Act.

Market surveillance authorities have same powers for AI systems as for other products under New Legislative Framework.

74.2 — Designated Authorities

Authority designation varies by AI system type:

| AI System Type | Market Surveillance Authority |
|---|---|
| High-risk AI in Union harmonization products | Authority responsible for that product sector |
| Financial institutions' AI | Relevant financial supervisory authority (ECB, ESMA, EIOPA, etc.) |
| Law enforcement AI | Data protection authority OR designated authority |
| Migration/asylum/border control AI | Data protection authority OR designated authority |
| Judicial system AI | Data protection authority OR designated authority |
| Union institutions' AI | European Data Protection Supervisor (EDPS) |
| Other high-risk AI | National market surveillance authority |

74.3 — Market Surveillance Powers

Authorities may:

| Power | Description |
|---|---|
| Request documentation | Technical documentation, conformity assessment reports |
| Access training data | Review datasets used for training, validation, testing |
| Request logs | Access automatically generated logs |
| Conduct testing | Test system with own data, simulate use cases |
| Request source code | Access code ONLY if other methods exhausted and insufficient |
| Inspect premises | On-site inspections of provider facilities |
| Interview personnel | Speak with developers, quality managers, oversight persons |
| Remote monitoring | Monitor systems remotely where appropriate |

74.4 — Source Code Access Restrictions

Source code access ONLY when BOTH conditions met:

  1. Necessary to assess conformity with Chapter III, Section 2 requirements, AND
  2. Other methods exhausted — testing and auditing based on documentation insufficient

Purpose: Protect trade secrets while ensuring authorities can verify compliance.

Safeguards:

  • Code must be kept confidential
  • Used only for conformity assessment
  • Not disclosed to third parties
  • Returned or destroyed after assessment

74.5 — Annual Reporting

Market surveillance authorities must report annually to Commission on:

  • Prohibited AI practices identified (Article 5)
  • Enforcement measures taken
  • Number of systems inspected
  • Serious incidents reported
  • Cross-border cooperation activities

Article 75: Mutual Assistance for General-Purpose AI

75.1 — AI Office Role

AI Office (within European Commission) supervises general-purpose AI models with systemic risk.

For AI systems based on GPAI models, AI Office has powers to:

  • Monitor compliance
  • Request information
  • Coordinate with national authorities

75.2 — Cooperation Procedure

When market surveillance authority cannot access information about GPAI model underlying high-risk system:

  1. Authority submits reasoned request to AI Office
  2. AI Office requests information from GPAI model provider
  3. GPAI provider supplies information to AI Office
  4. AI Office shares relevant information with requesting authority
Timeline: AI Office responds within 30 days.

Article 76: Supervision of Real-World Testing

76.1 — Authority Competences

Market surveillance authorities oversee testing in real-world conditions (Article 60).

Ensure testing complies with:

  • Informed consent requirements
  • Protection of fundamental rights
  • Safety and cybersecurity safeguards
  • Notification requirements

76.2 — Regulatory Sandboxes

For testing within AI regulatory sandboxes (Article 58):

  • Market surveillance authority verifies compliance with Article 60
  • May allow testing in derogation from certain requirements
  • Monitors testing throughout sandbox period

Article 77: Powers of Fundamental Rights Authorities

77.1 — Scope

National public authorities supervising fundamental rights (e.g., equality bodies, human rights commissions) have power to:

Request documentation:

  • Technical documentation
  • Risk management documentation
  • Fundamental rights impact assessments (FRIA)

When: Necessary to fulfill their mandate of protecting fundamental rights.

77.2 — Testing Powers

If documentation insufficient to determine fundamental rights breach:

  • Authority may request testing of AI system
  • Provider must cooperate
  • Testing focuses on discrimination, bias, fundamental rights impacts

77.3 — Confidentiality

Information obtained must be treated confidentially per Article 78.

Article 78: Confidentiality

78.1 — Confidentiality Obligations

All authorities, notified bodies, and other entities involved in AI Act application must respect confidentiality of:

  • Trade secrets
  • Intellectual property rights
  • Confidential business information
  • Source code (except where disclosure necessary)
  • Security information
  • Public security and defense interests

78.2 — Cybersecurity Measures

Authorities must implement technical and organizational measures to protect:

  • Confidential information obtained
  • Data security
  • Intellectual property

78.3 — Information Exchange

When confidential information exchanged between authorities:

  • Sending authority must indicate confidentiality
  • Receiving authority must protect accordingly
  • Use limited to purpose for which shared
  • No further disclosure without consent

Article 79: Procedure for AI Systems Presenting Risk

79.1 — Definition of Risk

AI system presenting risk: System that presents risks to:

  • Health or safety of persons
  • Fundamental rights of persons
  • Particularly vulnerable groups (children, elderly, persons with disabilities)

Applies same definition as “product presenting risk” under Regulation (EU) 2019/1020.

79.2 — Initial Assessment

When market surveillance authority has sufficient reason to consider AI system presents risk:

Step 1: Evaluation

  • Assess compliance with ALL AI Act requirements
  • Pay particular attention to risks to vulnerable groups
  • Examine technical documentation
  • Review logs and incident reports

Step 2: Decision

  • If compliant → Close case
  • If non-compliant → Proceed to corrective actions

79.3 — Corrective Action Procedure

If AI system does NOT comply:

Authority action:

  • Require operator (provider/deployer) to take corrective actions
  • Set deadline: 15 working days OR shorter per sectoral legislation
  • Specify required actions (bring into compliance, withdraw, recall)

Operator obligations:

  • Take all appropriate corrective actions
  • Bring system into compliance
  • Withdraw from market if necessary
  • Recall system if necessary
  • Report back to authority

79.4 — Provisional Measures

If operator does NOT take adequate corrective action within deadline:

Authority must:

  • Take provisional measures to prohibit or restrict system on national market
  • Notify European Commission without undue delay
  • Notify other Member States without undue delay
  • Specify reasons, evidence, and duration of measures

Notification must include:

  • System identification
  • Operator details
  • Non-compliance identified
  • Corrective actions required (if any)
  • Measures taken
  • Duration of measures

79.5 — Objection Period

Standard: 3 months for Commission or other Member States to object

Reduced for Article 5 violations: 30 days for prohibited practices

If no objection: Measure deemed justified, all Member States must take appropriate action

If objection raised: Commission evaluates per Article 81 (Union safeguard procedure)

Article 80: Non-High-Risk Reclassification

Covered in detail in database-registration.md under Article 80.

Summary: Market surveillance can reclassify self-assessed non-high-risk systems as high-risk if evidence supports reclassification.

Section 3: Union Safeguard Procedure [Art 81-84]

Article 81: Union Safeguard Procedure

81.1 — When Procedure Triggered

When Member State or Commission objects to provisional measure under Article 79:

Process:

  1. Commission assesses measure within reasonable time
  2. Commission consults Member States and relevant operators
  3. Commission evaluates whether measure is justified

81.2 — Commission Decision

If measure justified:

  • Commission confirms measure
  • All Member States must take necessary action (withdraw/recall system)
  • Measure becomes EU-wide

If measure NOT justified:

  • Commission requires Member State to withdraw measure
  • System may return to market in that Member State

81.3 — Widespread Non-Compliance

If Commission identifies widespread non-compliance across EU:

  • May take Union-level enforcement measures
  • May require withdrawal across all Member States
  • May impose Union-wide restrictions

Article 82: Compliant AI Systems Presenting Risk

82.1 — Scenario

AI system that complies with AI Act but still presents risk to health, safety, or fundamental rights.

Examples:

  • System technically compliant but used in unforeseeable way causing harm
  • Emerging risk not covered by existing requirements

82.2 — Authority Action

Market surveillance authority must:

  • Require operator to eliminate risk
  • Withdraw or recall system if risk cannot be eliminated
  • Notify Commission and other Member States

82.3 — Commission Response

Commission may:

  • Update common specifications (Article 41)
  • Request standard-setting organizations to revise standards
  • Amend Annexes via delegated acts

Article 83: Formal Non-Compliance

83.1 — Definition

System non-compliant with formal requirements but no immediate risk:

  • CE marking affixed incorrectly
  • EU declaration of conformity missing elements
  • Registration incomplete
  • Instructions for use inadequate

83.2 — Authority Action

If formal non-compliance persists after notification:

  • Authority may restrict market access until corrected
  • No need for Union safeguard procedure
  • National measure sufficient

Article 84: Union AI Testing Support Structures

84.1 — Purpose

Commission may establish EU-level testing facilities to support:

  • Market surveillance authorities
  • Notified bodies
  • Providers (voluntary testing)

84.2 — Services

Testing structures provide:

  • Technical expertise on AI testing
  • Access to testing tools and datasets
  • Standardized testing methodologies
  • Training for authorities

Section 4: Remedies [Art 85-87]

Article 85: Right to Lodge Complaint

85.1 — Who Can Complain

Any natural or legal person having grounds to consider AI system infringes AI Act.

Examples:

  • Individual affected by discriminatory decision
  • Consumer receiving inadequate transparency
  • NGO observing fundamental rights violations

85.2 — Where to Complain

Market surveillance authority in Member State where:

  • Person is located, OR
  • Alleged infringement occurred

85.3 — Authority Obligations

Authority must:

  • Acknowledge receipt
  • Investigate complaint (if admissible and substantiated)
  • Inform complainant of outcome
  • Take action if infringement confirmed

Article 86: Right to Explanation of Decisions

86.1 — Scope

Applies to deployers of high-risk AI systems that:

  • Make decisions about natural persons, OR
  • Assist in making decisions about natural persons

Covered areas: Employment, credit, education, law enforcement, public services, etc.

86.2 — Explanation Content

Upon request, affected person entitled to:

  • Clear explanation in understandable language
  • Role of AI system in decision-making process
  • Main elements considered by system
  • Logic of decision (at high level, not trade secrets)
  • Consequences for person
  • Right to challenge or request human review

86.3 — Timeline

Deployer must provide explanation without undue delay.

86.4 — Exceptions

No right to explanation when:

  • Law enforcement investigation would be prejudiced
  • National security considerations
  • Public security requires confidentiality

Article 87: Reporting of Infringements and Whistleblower Protection

87.1 — Protection Directive Application

Directive (EU) 2019/1937 (whistleblower protection) applies to reporting AI Act infringements.

87.2 — Protected Persons

  • Employees of providers, deployers
  • Contractors, consultants
  • Volunteers, interns
  • Anyone with access to information through work

87.3 — Reporting Channels

Internal: Company compliance mechanisms

External:

  • Market surveillance authorities
  • European Commission
  • Data protection authorities

Public disclosure: As last resort under whistleblower directive conditions

87.4 — Protections

  • No retaliation (dismissal, demotion, harassment)
  • Confidentiality of identity
  • Protection from legal liability (defamation, breach of confidentiality)

Section 5: Supervision of General-Purpose AI [Art 88-93]

Article 88: Enforcement of GPAI Obligations

88.1 — AI Office Powers

AI Office (European Commission) supervises compliance of general-purpose AI model providers with Chapter V obligations.

Models covered:

  • General-purpose AI models (all)
  • GPAI models with systemic risk (>10^25 FLOPs) — enhanced supervision

88.2 — Supervisory Powers

AI Office may:

  • Request information and documentation
  • Conduct evaluations
  • Request measures to ensure compliance
  • Impose fines for non-compliance

Article 89: Monitoring Actions

89.1 — Continuous Monitoring

AI Office monitors GPAI model providers for:

  • Compliance with transparency obligations
  • Systemic risk assessment and mitigation
  • Incident reporting
  • Cybersecurity measures

89.2 — Information Sources

Monitoring based on:

  • Provider self-disclosures
  • Scientific Panel alerts (Article 90)
  • Reports from downstream providers
  • Market intelligence

Article 90: Scientific Panel Alerts

90.1 — Panel Role

Scientific Panel of Independent Experts may issue alerts to AI Office about:

  • Potential systemic risks from GPAI models
  • New risks emerging from model capabilities
  • Inadequate risk mitigation by providers

90.2 — Alert Triggers

Panel considers:

  • Model capabilities (reasoning, code generation, multimodality)
  • Number of users and applications
  • Potential for misuse
  • Cybersecurity vulnerabilities

90.3 — AI Office Response

Upon alert, AI Office must:

  • Investigate immediately
  • Request information from provider
  • Evaluate risks
  • Require mitigation measures if necessary

Article 91: Power to Request Documentation

91.1 — Information Requests

AI Office may request from GPAI providers:

  • Technical documentation
  • Training data information (not data itself unless necessary)
  • Model architecture details
  • Risk assessments
  • Testing and evaluation results
  • Incident reports

91.2 — Response Timeline

Providers must respond within reasonable period specified by AI Office (typically 30-60 days).

91.3 — Confidentiality

AI Office must protect trade secrets and confidential business information.

Article 92: Power to Conduct Evaluations

92.1 — Evaluation Types

AI Office may conduct:

  • Document review: Assessment of written documentation
  • Model testing: Technical evaluation of model capabilities
  • On-site inspections: Visit provider facilities
  • Expert evaluations: Commission independent experts to assess the model

92.2 — Provider Cooperation

Providers must:

  • Grant access to premises
  • Provide access to model for testing
  • Supply technical information
  • Facilitate interviews with personnel

Article 93: Power to Request Measures

93.1 — Corrective Measures

If AI Office identifies non-compliance or systemic risk, may require provider to:

  • Implement risk mitigation measures
  • Update model documentation
  • Improve transparency
  • Restrict model access
  • Suspend model deployment (extreme cases)

93.2 — Timeline

Providers must implement measures within reasonable period specified by AI Office.

93.3 — Enforcement

Non-compliance with AI Office measures → Article 101 penalties (up to €15M or 3% global revenue).

Compliance Checklist

For Providers (High-Risk AI Systems)

Post-Market Monitoring (Art 72)

  • Establish post-market monitoring system proportionate to risks
  • Actively and systematically collect performance data
  • Document all incidents, malfunctions, user feedback
  • Analyze data for compliance with Section 2 requirements
  • Prepare post-market monitoring plan (part of technical documentation)
  • Implement data collection from real-world deployment
  • Monitor interactions with other AI systems (if applicable)
  • Track fundamental rights impacts
  • Update system based on monitoring findings
  • Integrate with existing monitoring systems (if permitted)

Serious Incident Reporting (Art 73)

  • Establish incident detection procedures
  • Classify incidents by severity (death, serious health, infrastructure)
  • Report deaths within 10 days
  • Report widespread/serious incidents within 2 days
  • Report other serious incidents within 15 days
  • Include all required information in reports
  • Investigate root causes
  • Implement corrective actions
  • Cooperate with authorities
  • Preserve evidence (do not alter system during investigation)

Authority Cooperation (Art 74, 79)

  • Respond to authority information requests without undue delay
  • Provide technical documentation upon request
  • Grant access to logs, training data, validation data
  • Facilitate testing by authorities
  • Provide source code if other methods exhausted
  • Take corrective actions within 15 working days when required
  • Withdraw or recall systems when non-compliant

For Deployers (High-Risk AI Systems)

Monitoring During Use (Art 26)

  • Monitor system operation according to instructions
  • Retain logs for at least 6 months
  • Report suspected risks to provider immediately
  • Cease use if system presents risk
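The minimum log-retention period can be enforced with a small policy check. A sketch only — 183 days is one reading of "at least 6 months", and sectoral legislation may require keeping logs longer:

```python
from datetime import date, timedelta

# "At least 6 months" (Article 26); 183 days is an illustrative reading,
# and other applicable law may require longer retention.
RETENTION_DAYS = 183

def expired_logs(log_dates: list, today: date) -> list:
    """Log dates for which the minimum retention period has already passed.

    Whether to actually delete after the minimum is a policy decision;
    this only identifies logs past the mandatory-retention window.
    """
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [d for d in log_dates if d < cutoff]

print(expired_logs([date(2026, 1, 1), date(2026, 8, 1)], today=date(2026, 9, 1)))
# → [datetime.date(2026, 1, 1)]
```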

Incident Reporting (Art 26, 73)

  • Immediately inform provider of serious incidents
  • Immediately inform market surveillance authority
  • Provide information about incident context

Complaint Handling (Art 85)

  • Establish procedures for individuals to request explanations (Art 86)
  • Respond to explanation requests without undue delay
  • Cooperate with authority investigations of complaints

For Market Surveillance Authorities

Ongoing Surveillance (Art 74)

  • Monitor AI systems in jurisdiction
  • Conduct risk-based inspections
  • Review serious incident reports
  • Investigate complaints
  • Annual reporting to Commission

Non-Compliance Procedures (Art 79)

  • Evaluate systems when sufficient reason to suspect risk
  • Require corrective actions within 15 working days
  • Take provisional measures if actions inadequate
  • Notify Commission and other Member States
  • Follow Union safeguard procedure

Cooperation (Art 75, 81)

  • Coordinate with other Member State authorities
  • Request assistance from AI Office for GPAI issues
  • Participate in Union safeguard procedure

Penalties for Non-Compliance

| Violation | Fine (Large Companies) | Fine (SMEs) |
|---|---|---|
| Failure to establish post-market monitoring (Art 72) | Up to €15M or 3% global revenue | Up to €3M or 3% global revenue |
| Failure to report serious incidents (Art 73) | Up to €15M or 3% global revenue | Up to €3M or 3% global revenue |
| Non-cooperation with authorities (Art 74, 79) | Up to €7.5M or 1.5% global revenue | Up to €1.5M or 1.5% global revenue |
| Supplying incorrect/incomplete information | Up to €7.5M or 1.5% global revenue | Up to €1.5M or 1.5% global revenue |
| GPAI non-compliance (Art 88-93) | Up to €15M or 3% global revenue | Up to €3M or 3% global revenue |

Legal basis: Articles 99 and 101 (administrative fines for infringements of obligations).

Timeline Summary

| Date / Deadline | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2026 | Commission publishes post-market monitoring plan template |
| August 2, 2026 | Post-market monitoring and serious incident reporting obligations apply |
| Within 15 days | Standard serious incident reporting deadline |
| Within 10 days | Serious incident reporting (death) |
| Within 2 days | Serious incident reporting (widespread/serious) |
| Within 15 working days | Corrective action deadline when authority requires |
| Within 7 days | Authority response to serious incident notification |

Practical Guidance

Setting Up Post-Market Monitoring

Step 1: Define scope

  • Identify all high-risk AI systems requiring monitoring
  • Determine data sources (logs, user feedback, deployer reports)
  • Define monitoring metrics aligned with Section 2 requirements

Step 2: Establish data collection

  • Implement automated log aggregation
  • Create feedback channels for users/deployers
  • Set up incident reporting systems

Step 3: Analysis procedures

  • Define KPIs for accuracy, robustness, fundamental rights
  • Establish statistical analysis methods
  • Create dashboards for real-time monitoring
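As one concrete example of such a KPI, a rolling-window accuracy tracker with a documented investigation threshold might look like this. The window size and threshold are illustrative choices, not values from the Act:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy tracker — a minimal sketch of the kind of
    KPI monitoring Article 72 expects. Thresholds are illustrative and
    would be set in the provider's own monitoring plan."""

    def __init__(self, window: int = 500, threshold: float = 0.95, min_samples: int = 100):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_investigation(self) -> bool:
        """Flag when rolling accuracy drops below the documented threshold."""
        return len(self.outcomes) >= self.min_samples and self.accuracy() < self.threshold

monitor = AccuracyMonitor(window=200, threshold=0.95)
for _ in range(150):
    monitor.record(True)
for _ in range(50):
    monitor.record(False)
print(monitor.accuracy(), monitor.needs_investigation())  # → 0.75 True
```

A flag like `needs_investigation()` would feed the thresholds and escalation paths described in Step 4 of a real monitoring setup.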

Step 4: Response procedures

  • Define thresholds for triggering investigation
  • Establish corrective action workflows
  • Create incident escalation paths

Step 5: Documentation

  • Write post-market monitoring plan per Annex IV
  • Document all findings and actions taken
  • Maintain audit trail for authorities

Incident Response Protocol

Detection:

  1. Monitor logs, user reports, deployer notifications for incidents
  2. Classify severity: death, serious health damage, infrastructure disruption
  3. Establish preliminary causal link to AI system

Immediate actions:

  1. Preserve evidence (logs, system state, inputs/outputs)
  2. Notify affected deployers
  3. Assess immediate risk to other deployments
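A simple way to freeze evidence at detection time is to snapshot the relevant records and record a content hash, so any later alteration is detectable. The file layout and naming below are illustrative, not prescribed by the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

def preserve_evidence(records: list, path: str) -> str:
    """Write an incident evidence snapshot to disk and return its SHA-256 digest.

    Freezing logs with a content hash supports the Article 73 duty not to
    alter the system or its records in ways that could impair the
    investigation. File layout and naming here are illustrative.
    """
    payload = json.dumps(
        {"captured_at": datetime.now(timezone.utc).isoformat(), "records": records},
        sort_keys=True,
    ).encode()
    with open(path, "wb") as f:
        f.write(payload)
    return hashlib.sha256(payload).hexdigest()

digest = preserve_evidence(
    [{"ts": "2026-09-01T10:02:11Z", "input": "case-4411", "output": "deny"}],
    "incident-0001.json",
)
print(digest)  # record this digest alongside the incident file
```

Storing the digest separately from the snapshot lets an authority later verify that the evidence has not been modified.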

Investigation:

  1. Conduct root cause analysis
  2. Determine if incident isolated or systemic
  3. Assess causal relationship between AI and harm

Reporting:

  1. Prepare incident report with all required information
  2. Submit to market surveillance authority within applicable deadline
  3. Notify notified body (if applicable)

Corrective actions:

  1. Implement fixes to prevent recurrence
  2. Update system if necessary
  3. Communicate with all deployers
  4. Update instructions for use if needed

Follow-up:

  1. Verify effectiveness of corrective actions
  2. Report back to authority
  3. Update post-market monitoring plan

Citation

Chapter IX — Post-Market Monitoring, Information Sharing and Market Surveillance, Regulation (EU) 2024/1689

Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
