Article 73 requires providers to report serious incidents to the market surveillance authority, with strict timelines ranging from 2 to 15 days. This section covers the Article 3(49) definition, the three reporting tiers, building an operational response capability, evidence preservation, and managing legal privilege for the incident register.
Article 73 is the most operationally demanding obligation in the post-market phase, requiring providers to report serious incidents to the market surveillance authority of the member state where the incident occurred, with strict timelines that leave little room for internal deliberation. Article 3(49) defines a serious incident as an incident or malfunction that directly or indirectly results in death or serious harm to a person's health, serious and irreversible disruption to the management or operation of critical infrastructure, infringement of obligations under EU law intended to protect fundamental rights, or serious harm to property or the environment.
The Commission's September 2025 draft guidance clarifies several interpretive points. Serious harm to health includes life-threatening illness, temporary or permanent impairment of bodily function, conditions necessitating or prolonging hospitalisation, and medical interventions required to prevent such outcomes. Serious and irreversible disruption of critical infrastructure requires both seriousness, meaning imminent threat to life or physical safety, and irreversibility, meaning physical infrastructure requiring reconstruction, essential data irrecoverable, or specialised equipment irreparably damaged. Fundamental rights infringements must significantly interfere with Charter-protected rights on a large scale, establishing a high threshold intended to prevent trivial reporting.
Examples of serious incidents include recruitment systems systematically discriminating based on ethnicity or gender across a significant number of applicants, credit scoring systems categorically rejecting individuals from specific neighbourhoods in a pattern that constitutes large-scale fundamental rights infringement, and clinical decision support systems providing incorrect analysis that leads to delayed treatment for multiple patients. Indirect causation is sufficient under Article 3(49): an AI system providing incorrect medical analysis that leads to patient harm through subsequent physician decisions constitutes an indirect serious incident, because the AI system's output was a contributing factor in the chain of causation even though the physician made the final decision.
The reporting regime is tiered by severity. Widespread infringement of fundamental rights or serious and irreversible disruption to critical infrastructure requires reporting within 2 days of awareness under Article 73(3). Death of a person, or a suspected causal link between the AI system and a death, requires reporting within 10 days under Article 73(4). All other serious incidents meeting the Article 3(49) definition require reporting within 15 days under Article 73(2).
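The tiered deadlines can be encoded directly, so that the triage output drives the reporting clock rather than ad hoc judgment. A minimal sketch, assuming the deadlines run in calendar days from the Article 73 awareness date; the tier and function names are illustrative:

```python
from datetime import date, timedelta
from enum import Enum

class Tier(Enum):
    # Values are the maximum number of days from awareness to report.
    WIDESPREAD_FR_OR_CRITICAL_INFRA = 2
    DEATH = 10
    OTHER_SERIOUS = 15

def reporting_deadline(awareness: date, tier: Tier) -> date:
    """Latest permissible reporting date, counted from the awareness date.

    Assumes calendar days; confirm the day-counting convention with counsel.
    """
    return awareness + timedelta(days=tier.value)
```

Encoding the deadline as data also makes it trivial to drive countdown alerts from the incident management system as the deadline approaches.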
The clock starts when the provider becomes aware of the incident and establishes a causal link, or reasonable likelihood of one, between the AI system and the harm. In practice, awareness often develops gradually: a deployer reports an anomaly, the engineering team investigates, the investigation reveals a potential causal link, and the legal team confirms that the incident meets the Article 3(49) definition. Organisations must design their internal processes to compress this chain, because the two-day and ten-day timelines leave almost no margin.
Article 73(5) explicitly permits the submission of an initial, incomplete report followed by supplementary information as the investigation progresses. This is a practical concession: the Commission recognises that full root cause analysis cannot be completed within two days. The initial report should contain the provider's identity and contact details, the system's identity including its EU database registration reference, a description of the incident covering what happened, when it happened, where it occurred, and who was affected or potentially affected, the suspected causal link between the AI system's output or behaviour and the harm, and the immediate actions the provider has taken to contain the harm or prevent further harm. Supplementary reports update the authority as the investigation produces findings, corrective actions are implemented, and the full scope of impact becomes clear.
The serious incident reporting obligation cannot be met by a policy document alone. It requires an operational capability that functions under pressure, across time zones, and outside business hours. Four operational components are required.
The serious incident detection, triage, and evidence preservation pipeline must operate fast enough to meet the reporting timelines, which means the entire chain from event detection to regulatory submission must be designed and rehearsed in advance. Detection infrastructure requires the Technical SME to configure the PMM system with alert rules specifically mapped to Article 3(49) criteria. Alert configurations should include critical-severity thresholds corresponding to potential serious incidents: a fairness metric breach exceeding a magnitude that could affect fundamental rights, a performance degradation in a safety-critical function that could affect health, and a security breach that could compromise critical infrastructure. These thresholds should be more sensitive than standard PMM thresholds, because it is better to triage a false positive than to miss a genuine serious incident. Automated alerts should also trigger when deployers report unexpected or harmful outcomes, when complaint volumes spike suggesting systematic harm, and when monitoring metrics breach critical thresholds suggesting real-world impact.
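Such alert rules can be expressed as data so they are reviewable alongside the rest of the PMM configuration. A hypothetical sketch — the metric names and thresholds are invented for illustration and are not prescribed by the Act:

```python
# Hypothetical critical-severity alert rules mapping PMM metrics to
# Article 3(49) criteria. Metric names and thresholds are illustrative.
CRITICAL_ALERT_RULES = [
    {
        "metric": "demographic_parity_gap",   # fairness metric
        "threshold": 0.10,                    # tighter than the standard PMM threshold
        "direction": "above",
        "art_3_49_criterion": "fundamental_rights",
    },
    {
        "metric": "safety_function_recall",   # safety-critical performance
        "threshold": 0.95,
        "direction": "below",
        "art_3_49_criterion": "health",
    },
    {
        "metric": "unauthorised_access_events",
        "threshold": 0,
        "direction": "above",
        "art_3_49_criterion": "critical_infrastructure",
    },
]

def breaches(rule: dict, value: float) -> bool:
    """Return True when a metric value crosses its critical threshold."""
    if rule["direction"] == "above":
        return value > rule["threshold"]
    return value < rule["threshold"]
```

Keeping the Article 3(49) criterion on each rule means every critical alert arrives at triage already tagged with the definitional limb it may engage.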
The triage process is triggered by any detection event and follows a documented decision tree. It assesses whether the event meets the Article 3(49) definition by examining three questions: is there evidence of actual or potential harm to individuals; does the harm meet the Article 3(49) severity criteria; and is the harm attributable to the AI system's output or behaviour? The triage classifies the severity tier as two-day, ten-day, or fifteen-day reporting, assigns an incident lead, typically a senior Technical SME, notifies the AI Governance Lead and the Legal and Regulatory Advisor immediately, and initiates evidence preservation covering system logs, model state, and data snapshots. Triage should be completed within 24 hours of detection, performed by the incident response team with input from the AI Governance Lead and legal counsel. Each triage determination, whether the serious incident is confirmed or ruled out, is documented by the AI System Assessor with the evidence considered and the rationale, regardless of the conclusion. This documentation is retained as part of the incident register evidence.
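The three triage questions can be encoded as a short, order-preserving decision function that records a rationale at every exit, matching the documentation requirement above. A sketch with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    serious_incident: bool
    rationale: str  # recorded whether the incident is confirmed or ruled out

def triage(actual_or_potential_harm: bool,
           meets_severity_criteria: bool,
           attributable_to_ai_output: bool) -> TriageResult:
    """Walk the three Article 3(49) questions in order, documenting each exit."""
    if not actual_or_potential_harm:
        return TriageResult(False, "no evidence of actual or potential harm")
    if not meets_severity_criteria:
        return TriageResult(False, "harm below Article 3(49) severity criteria")
    if not attributable_to_ai_output:
        return TriageResult(False, "harm not attributable to the AI system's output")
    return TriageResult(True, "all three Article 3(49) questions answered yes")
```

Forcing a rationale string on every branch mirrors the requirement that ruled-out determinations are documented with the same rigour as confirmed ones.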
Article 73(6) explicitly prohibits the provider from altering the AI system in a way that could affect subsequent evaluation of the incident's causes prior to informing the competent authorities. This prohibition is absolute: the system's state must be preserved as forensic evidence before any remediation takes place. The engineering team preserves the system's state at the time of the incident: the exact model version including its hash and registry reference, the complete configuration including thresholds and feature flags, the input data that the system processed during the incident period, the output data the system produced, the feature values computed during inference, and all relevant logs from every layer of the system architecture.
If the system is operating in a way that continues to cause harm, there is a tension between the evidence preservation requirement and the duty to prevent further harm. The break-glass procedure should be activated to stop processing, but the system's state must be captured before or simultaneously with the stop action. The automated snapshot mechanism is the resolution: it captures the forensic record within minutes of the trigger, enabling the break-glass to proceed without losing evidence.
Automated snapshot scripts should trigger on any critical-severity alert, capturing the current model state including version, configuration, and parameters, the inference logs for the incident period, the monitoring metrics, the data pipeline state, and the system configuration. The snapshot is written to immutable storage within minutes of the trigger. This evidence is essential for two purposes: the regulatory notification, which must include a description of the incident and the AI system involved, and the potential subsequent investigation by the competent authority.
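A snapshot script along these lines might record the artefacts and their content hashes in a single manifest. This is a minimal sketch: a production version would copy the artefacts themselves into immutable (WORM) storage rather than merely hashing them in place, and the function and field names here are assumptions:

```python
import hashlib
import json
import time
from pathlib import Path

def snapshot_evidence(model_path: Path, config: dict,
                      log_paths: list[Path], archive_dir: Path) -> Path:
    """Write a point-in-time evidence manifest with SHA-256 content hashes.

    Illustrative sketch only: production code would also copy the artefacts
    into immutable storage and capture inference data for the incident period.
    """
    def digest(p: Path) -> str:
        return hashlib.sha256(p.read_bytes()).hexdigest()

    manifest = {
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": {"path": str(model_path), "sha256": digest(model_path)},
        "config": config,
        "logs": [{"path": str(p), "sha256": digest(p)} for p in log_paths],
    }
    archive_dir.mkdir(parents=True, exist_ok=True)
    out = archive_dir / f"incident-snapshot-{int(time.time())}.json"
    out.write_text(json.dumps(manifest, indent=2))
    return out
```

The content hashes let the provider later demonstrate to the competent authority that the preserved artefacts are the ones that existed at the time of the trigger.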
The reporting execution process follows a defined chain of review and approval. The incident lead prepares the initial report using the Commission's reporting template, drawing on the evidence snapshot and the triage assessment to populate the required fields. The Legal and Regulatory Advisor reviews the report for completeness, accuracy, and legal exposure, ensuring that the description of the incident and the suspected causal link are factually precise without inadvertently conceding liability. Once reviewed, the AI Governance Lead authorises submission and takes responsibility for the report's content.
The report is submitted to the market surveillance authority of the member state where the incident occurred. If the incident occurred in multiple member states, the Legal and Regulatory Advisor submits reports to each relevant authority, coordinating the submissions to ensure consistency. Where the provider has a relationship with the competent authority through prior registration or inspection, the Conformity Assessment Coordinator should use the established communication channel to ensure the report reaches the correct team.
Following the initial report, Article 73(6) requires the provider to investigate, assess the incident's risk, and take corrective steps. The investigation must determine three things. First, the root cause: whether the incident arose from a data issue such as training data bias or drift, a model deficiency such as degraded performance or adversarial vulnerability, an integration error such as a configuration mismatch between the model and the deployment environment, a deployment setup issue, a human oversight failure where operators did not detect or intervene in the harmful output, or an external factor outside the system's control. Second, the scope of impact: how many persons were affected, which demographic subgroups were disproportionately impacted, and in which member states the harm occurred. Third, the appropriate remedy: whether the situation requires a model fix, a data correction, a setup change, deployment restrictions limiting the system to lower-risk contexts, system withdrawal from the market entirely, or enhanced human oversight with additional safeguards.
Organisations should maintain a serious incident reporting register that tracks every event assessed against the Article 3(49) criteria, regardless of whether it was ultimately determined to constitute a serious incident. The register records the event description, the date of detection, the date of awareness as defined by Article 73, the triage outcome as serious incident confirmed, ruled out, or under investigation, the severity tier and applicable reporting deadline, the submission date and recipient authority, the investigation status and findings, the corrective actions taken, and the closure date.
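The register fields listed above map naturally onto a structured record. A sketch with illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class IncidentRegisterEntry:
    """One register row; field names are illustrative, mirroring the text above."""
    event_description: str
    detected_on: date
    aware_on: date                    # date of awareness as defined by Article 73
    triage_outcome: str               # "confirmed" | "ruled_out" | "under_investigation"
    severity_tier_days: Optional[int] = None   # 2, 10, or 15 once confirmed
    submitted_on: Optional[date] = None
    recipient_authority: Optional[str] = None
    investigation_findings: str = ""
    corrective_actions: list[str] = field(default_factory=list)
    closed_on: Optional[date] = None

    @property
    def reporting_deadline(self) -> Optional[date]:
        # No deadline applies to events triaged out of the Article 73 regime.
        if self.severity_tier_days is None:
            return None
        return self.aware_on + timedelta(days=self.severity_tier_days)
```

Deriving the deadline from the awareness date and the tier, rather than entering it by hand, removes one source of clerical error from a process that runs under time pressure.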
The register serves multiple purposes that extend beyond the immediate reporting obligation. It demonstrates to market surveillance authorities that the organisation has a functioning incident detection and assessment process, which is itself a QMS requirement under Article 17. It provides an evidence base for trend analysis: recurring near-miss events may indicate a systemic weakness that requires proactive intervention before a serious incident occurs, and the pattern analysis capability transforms the register from a compliance record into a continuous improvement tool. It protects the organisation in the event of a dispute about whether a reporting obligation was met, by documenting the assessment process and the reasoning behind each determination with the evidence that supported it.
The incident register's candid documentation of near-miss events, triage reasoning, and internal assessments creates a tension between regulatory transparency and legal exposure. In civil proceedings, particularly under the AI Liability Directive proposal or national tort law, the register could become discoverable evidence. An entry recording that potential systematic discrimination was identified but assessed as below the serious incident threshold could be used to argue the organisation was aware of a problem and chose not to act.
Organisations should consult with their Legal and Regulatory Advisor on the application of legal professional privilege to the incident register in each jurisdiction where the system is deployed. In many EU jurisdictions, documents prepared to support obtaining or providing legal advice attract privilege. A register maintained under the direction of the Legal and Regulatory Advisor, to support assessing the organisation's legal obligations under Article 73, has a stronger privilege claim than one maintained by the engineering team as a general operational log.
Practical measures to protect privilege include establishing the register under the Legal and Regulatory Advisor's authority with a documented purpose of assessing legal reporting obligations. Factual operational data, which is not privileged, should be separated from legal analysis and assessment, which may be. Privileged entries should be labelled clearly. Access to the legal assessment layer should be restricted to persons who need it to support providing or receiving legal advice. Voluntary disclosure of privileged content should be avoided in contexts where privilege could be waived.
The notification package should be pre-templated. A structured template covering the required information, including system identification, incident description, affected persons estimate, timeline, corrective actions taken, and corrective actions planned, is prepared by the Conformity Assessment Coordinator in advance, with fields populated from the incident management system. The template reduces the time between triage determination and regulatory submission. Incident management tools such as Incident.io and FireHydrant provide structured incident timelines that can be exported into the notification template, ensuring that the narrative is accurate and chronologically ordered.
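A pre-drafted template with named placeholder fields has the useful property that rendering fails loudly when a required field is missing, rather than producing a report with a silent gap. A hypothetical sketch — the field names are illustrative, not the Commission's official template:

```python
# Illustrative notification skeleton; field names are assumptions,
# not the Commission's official reporting template.
NOTIFICATION_TEMPLATE = """\
Provider: {provider_name} ({contact})
AI system: {system_name}, EU database ref {eu_db_ref}
Incident: {incident_description}
Occurred: {occurred_at}; detected: {detected_at}
Affected persons (estimate): {affected_estimate}
Suspected causal link: {causal_link}
Corrective actions taken: {actions_taken}
Corrective actions planned: {actions_planned}
"""

def render_notification(fields: dict) -> str:
    """Raise KeyError on any missing field rather than submit a gap."""
    return NOTIFICATION_TEMPLATE.format(**fields)
```

The same field dictionary can be exported from the incident management system's structured timeline, keeping the narrative chronologically ordered.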
The AI Governance Lead documents and communicates the corrective action to the competent authority as a supplement to the initial report. If the corrective action involves a substantial modification to the system, a new conformity assessment may be required under the framework described in version control.
High-risk AI systems in sectors with existing equivalent reporting obligations, such as NIS2 for critical infrastructure, DORA for financial services, or medical device vigilance regulations, have simplified AI Act reporting requirements. Under Article 73(9), the AI Act reporting obligation is limited to fundamental rights infringements as defined in Article 3(49)(c); other serious incidents are reported through the sector-specific regime. Organisations operating under multiple reporting regimes must map the overlap, identify which incidents trigger which reporting obligations, and ensure that internal processes route incidents to the correct authority through the correct channel within the correct timeline. A single incident may trigger reporting under the AI Act, NIS2, GDPR data breach notification, and sector-specific legislation simultaneously, each with its own deadline and recipient authority.
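The regime overlap can be captured as an explicit routing table, so that one incident's characteristics resolve to every applicable reporting regime. A hypothetical sketch — the tags and regime labels are illustrative, and real applicability turns on sector and facts to be confirmed with counsel:

```python
# Illustrative routing table: incident characteristics -> reporting regimes.
# Tags and regime labels are assumptions; real mapping needs legal review.
REGIME_RULES = {
    "fundamental_rights": ["AI Act Art. 73"],
    "personal_data_breach": ["GDPR Art. 33"],
    "ict_disruption_financial": ["DORA", "check AI Act Art. 73(9) carve-out"],
    "critical_infrastructure": ["NIS2", "check AI Act Art. 73(9) carve-out"],
}

def applicable_regimes(incident_tags: set[str]) -> set[str]:
    """Union of reporting regimes triggered by the incident's characteristics."""
    out: set[str] = set()
    for tag in incident_tags:
        out.update(REGIME_RULES.get(tag, []))
    return out
```

Encoding the mapping this way gives each incident a checklist of recipient authorities, each of which then carries its own deadline.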
None of the privilege-protection measures described above guarantees protection, and privilege rules vary across jurisdictions. The Legal and Regulatory Advisor documents the organisation's approach, reviews it with external legal counsel, and adapts it to the specific legal landscape of each deployment jurisdiction. The AISDP itself should not reference the privileged content of the register; it should reference the register's existence and its role in the PMM framework.
For organisations using procedural rather than automated approaches, a documented triage decision tree guides the assessor through the Article 3(49) criteria for each detected event. A structured triage form records the event, the evidence considered, the determination, and the rationale. A manual evidence snapshot procedure documents the steps for capturing the current model state, configuration, and log extracts to a designated archive location. This procedure should be practised quarterly by the engineering team to ensure it can be executed under time pressure. A pre-drafted regulatory notification template with placeholder fields, prepared by the Conformity Assessment Coordinator, completes the minimum viable serious incident response capability. The PMM monitoring infrastructure, even the minimal manual version, must generate the signals that trigger triage. Fully manual detection, where a person simply notices a problem without monitoring, is not reliably timely enough for the reporting obligations.