The AI Act establishes a graduated penalty framework under Article 99, with three tiers calibrated to the severity of the violation. Tier 1 carries the highest maximum penalty in the EU regulatory landscape, exceeding the GDPR's ceiling, and covers prohibited AI practices under Article 5, including subliminal manipulation, exploitation of vulnerable persons, social scoring, and unauthorised real-time biometric identification. The maximum fine is EUR 35 million or 7 per cent of global annual turnover.
Tier 2 covers high-risk system obligation breaches under Articles 8 through 15, Article 17, and Articles 25 through 49. This is the tier most directly relevant to AISDP preparation: failure to maintain adequate technical documentation, failure to conduct conformity assessment, failure to register in the EU database, failure to implement human oversight, failure to establish post-market monitoring, and failure to report serious incidents within required timelines. The maximum fine is EUR 15 million or 3 per cent of global annual turnover.
Tier 3 covers providing incorrect, incomplete, or misleading information to notified bodies or competent authorities under Article 99(4), including inaccurate EU database registration data, misleading statements in the Declaration of Conformity, or incomplete responses to competent authority information requests. The maximum fine is EUR 7.5 million or 1 per cent of global annual turnover. In each tier, the higher of the two amounts applies, except for SMEs and start-ups, where Article 99(6) provides that the lower amount applies. Penalties are not theoretical; the framework is designed to create meaningful deterrence, and national competent authorities have the investigative powers under Article 74 to detect and pursue non-compliance.
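The tier logic above can be expressed in code. The following is a minimal sketch, assuming only the caps stated in this section and a simplified reading of the Article 99(6) SME rule; actual penalty determination also involves the Article 99(7) factors and authority discretion.

```python
# Illustrative sketch of the Article 99 maximum-fine logic.
# The caps are taken from the text above; the function itself is
# an assumption-laden simplification, not legal advice.

TIER_CAPS = {
    1: (35_000_000, 0.07),  # prohibited practices, Article 5
    2: (15_000_000, 0.03),  # high-risk obligation breaches
    3: (7_500_000, 0.01),   # incorrect or misleading information, Article 99(4)
}

def max_fine(tier: int, global_annual_turnover: float, is_sme: bool = False) -> float:
    """Return the maximum fine for a tier: the higher of the absolute cap
    and the turnover percentage applies, except for SMEs and start-ups,
    where Article 99(6) applies the lower of the two."""
    absolute, pct = TIER_CAPS[tier]
    turnover_based = global_annual_turnover * pct
    return min(absolute, turnover_based) if is_sme else max(absolute, turnover_based)
```

For example, a Tier 1 infringement by a provider with EUR 1 billion turnover yields `max_fine(1, 1_000_000_000)`, i.e. EUR 70 million (7 per cent exceeds the EUR 35 million floor), while an SME with EUR 100 million turnover facing Tier 2 yields the lower EUR 3 million.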
Competent authorities may initiate enforcement proceedings through several pathways. Proactive market surveillance occurs when the authority systematically reviews AI systems in its jurisdiction, examining both the technical documentation and the deployed system's actual behaviour. Reactive investigation follows a complaint from an affected person, a deployer, or a competitor, and may expand from the specific complaint into a broader review of the system's compliance posture. Serious incident notification occurs when the provider's own Article 73 report prompts the authority to investigate the system more broadly, as the circumstances of a serious incident often reveal systemic compliance weaknesses. Cross-border referral occurs when an authority in another member state identifies a concern about a system that is also deployed in the referring authority's jurisdiction. Media or civil society reporting may prompt an investigation when public attention focuses on an AI system's behaviour, particularly where the reporting alleges fundamental rights impacts.
The AISDP is central to the enforcement process. The competent authority's first request will typically be for the complete technical documentation. An AISDP that is incomplete, inconsistent with the deployed system, or unsupported by evidence is itself a non-compliance finding and may trigger a Tier 2 penalty. Article 74 provides competent authorities with the investigative powers to request full technical documentation, conduct evaluations, and require corrective actions, making the penalty framework a meaningful deterrent rather than a theoretical risk.
Article 99(7) directs competent authorities to consider several factors when determining the penalty amount. The nature, gravity, and duration of the infringement frame the assessment. Mitigating factors include whether corrective actions were taken promptly and effectively; the degree of cooperation with the competent authority; the degree of responsibility, taking account of the technical and organisational measures implemented; and the extent to which the provider proactively brought the infringement to the authority's attention.
Aggravating factors include the deliberate nature of the infringement, failure to take corrective action after the authority identified the issue, the provider's history of previous infringements, and the number of affected persons and the severity of the harm they suffered. The practical implication for AISDP preparation is that the quality of the compliance programme is itself a mitigating factor. An organisation that can demonstrate a thorough AISDP, a functioning PMM system, a responsive incident reporting capability, and a cooperative posture toward the competent authority will face materially lower penalty exposure than one that cannot. The AISDP is more than the document the authority reviews; it is part of the evidence that determines the consequence of any deficiency the authority finds.
In the early implementation period, conflicting guidance from different member states is a real risk. The AI Act is a regulation and applies uniformly, but national competent authorities may interpret ambiguous provisions differently, particularly where the Commission has not yet published implementing guidance.
The Legal and Regulatory Advisor should maintain a structured register of jurisdiction-specific guidance relevant to the organisation's AI systems, covering all member states in which the system is deployed or planned for deployment. When a new guidance document is published by any relevant member state authority, it should be assessed for consistency with guidance from other authorities and with the AI Office's publications. Where a conflict is identified, the register should record the specific provision in dispute, the conflicting interpretations from each authority, the affected AISDP modules, and the organisation's assessment of which interpretation to follow with supporting rationale.
The organisation should adopt the more conservative interpretation unless doing so would conflict with a third authority's guidance or create a technical impossibility. The Legal and Regulatory Advisor documents the rationale for the chosen interpretation in the AISDP's compliance rationale section, with references to the specific guidance documents considered. If the conflict is material, affecting the system's design, human oversight model, or intended purpose scope, the organisation should consider raising the issue with the AI Office, which has a mandate to coordinate consistent application of the Act across member states. An advisory opinion from the competent authority in the primary deployment jurisdiction provides additional protection; the request and response are documented as compliance evidence.
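The register described above can be modelled as a simple structure. The following is a minimal Python sketch; the field names and example values are illustrative assumptions, as the AI Act prescribes no format for such a register.

```python
# Hypothetical guidance-conflict register entry, mirroring the fields
# described in the text: provision in dispute, conflicting interpretations,
# affected AISDP modules, and the chosen interpretation with rationale.

from dataclasses import dataclass

@dataclass
class GuidanceConflict:
    provision: str                    # AI Act provision in dispute
    interpretations: dict[str, str]   # authority -> interpretation summary
    affected_aisdp_modules: list[str]
    chosen_interpretation: str        # authority whose reading is followed
    rationale: str                    # recorded in the AISDP compliance rationale
    raised_with_ai_office: bool = False

# Illustrative entry (authorities and readings are invented for the example).
conflict = GuidanceConflict(
    provision="Article 14 human oversight scope",
    interpretations={
        "Authority A": "oversight must be exercisable in real time",
        "Authority B": "periodic review of outputs suffices",
    },
    affected_aisdp_modules=["Module 1", "Module 6"],
    chosen_interpretation="Authority A",  # the more conservative reading
    rationale="Real-time oversight capability satisfies both readings.",
)
```

Keeping each entry machine-readable makes it straightforward to report which modules are affected whenever a new guidance document lands.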
The EU database registration is publicly accessible, and errors in registration information are visible to market surveillance authorities, deployers, affected persons, competitors, and the media. Quality assurance before submission is essential.
Registration data quality assurance involves a structured four-step internal review process before any information is submitted to the EU database. The Conformity Assessment Coordinator prepares the registration data by extracting required fields from the AISDP and formatting them to the database's submission requirements. The Technical SME reviews technical content for accuracy and consistency with the AISDP, covering system description, data collection description, and intended purpose. Legal counsel reviews legal content for accuracy and completeness, covering provider identification, authorised representative details, and any non-high-risk justification. The AI Governance Lead approves the final submission.
Registration information must be consistent with the AISDP, the Declaration of Conformity, and the Instructions for Use. The intended purpose description in the registration must match AISDP Module 1. The data collection description must be factually consistent with Module 4. Any inconsistency creates a compliance vulnerability: the competent authority may compare the two and treat discrepancies as evidence of inadequate quality management. The Conformity Assessment Coordinator maintains a mapping table tracing each registration field to its AISDP source, enabling rapid consistency verification whenever either document is updated. After submission, the Conformity Assessment Coordinator should review the published entry in the public database within one week and correct any errors immediately. Display formatting, character encoding, and field truncation can introduce discrepancies between the submitted information and the displayed entry. Post-submission verification is a simple control that prevents publicly visible errors from undermining the organisation's compliance posture.
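The mapping table and consistency check described above can be sketched as follows. The field paths and module keys are illustrative assumptions, not the EU database's actual schema.

```python
# Hypothetical mapping from registration fields to their AISDP sources,
# plus a simple consistency check, assuming both documents are available
# as dictionaries. Names are invented for the example.

REGISTRATION_TO_AISDP = {
    "intended_purpose": "module_1.intended_purpose",
    "data_collection_description": "module_4.data_collection",
}

def find_inconsistencies(registration: dict, aisdp: dict) -> list[str]:
    """Return registration fields whose content diverges from the AISDP source."""
    problems = []
    for reg_field, source_path in REGISTRATION_TO_AISDP.items():
        module, key = source_path.split(".")
        if registration.get(reg_field) != aisdp.get(module, {}).get(key):
            problems.append(f"{reg_field} does not match {source_path}")
    return problems
```

Running such a check whenever either document changes gives the Conformity Assessment Coordinator the rapid consistency verification the mapping table is meant to enable.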
When an AI system is withdrawn from the market, whether voluntarily or by mandate, several procedural requirements apply. Voluntary withdrawal may be driven by commercial reasons, such as product discontinuation or replacement by a successor system; technical reasons, where the system can no longer be maintained to the required standard or the underlying technology stack has been superseded; or compliance reasons, where the organisation determines that the system cannot achieve or maintain conformity with the AI Act. In a voluntary withdrawal, the provider must update the EU database registration status to indicate the system is no longer on the market or in service; notify all deployers of the withdrawal, with a clear timeline and transition guidance covering alternative systems, data export procedures, and wind-down arrangements; and retain the complete AISDP, evidence pack, and version history for the full ten-year retention period from the date the system was originally placed on the market.
A competent authority may order withdrawal of a non-conforming system under Article 79. Mandated withdrawal imposes additional obligations beyond those for voluntary withdrawal: the provider must take corrective action or withdraw the system within a timeframe set by the authority, and must inform the other member state authorities and the Commission if the non-compliance affects systems deployed across jurisdictions. Failure to comply with a withdrawal order is an aggravating factor for penalty determination under Article 99.
Article 79 also empowers authorities to order a recall, which requires the provider to retrieve the system from deployers who have already acquired it. Recall is a more severe action than withdrawal and is typically reserved for systems that present a serious risk to health, safety, or fundamental rights. The provider must notify all affected deployers, coordinate the recall logistics including system disabling or return procedures, and report the recall outcome to the competent authority.
Regardless of how a guidance conflict between member state authorities is resolved, the organisation documents its interpretation, its reasoning, the guidance it considered, and the evidence supporting its position. If the conflict is later resolved by the AI Office, a court, or harmonised standards, the organisation assesses whether the resolution affects its compliance posture and updates the AISDP accordingly. A well-documented position, even if it later proves to be on the wrong side of a resolved conflict, demonstrates a good-faith compliance effort and serves as a mitigating factor under Article 99(7).
Withdrawal does not extinguish the provider's obligations. The ten-year documentation retention period under Article 18 continues from the date the system was placed on the market, not from the date of withdrawal. Serious incidents that come to light after withdrawal, where affected persons report harm that occurred while the system was in service, must still be reported under Article 73 within the applicable timelines. Post-market monitoring obligations continue for the period during which the system was in service, as the provider may be required to analyse historical monitoring data in response to post-withdrawal investigations or complaints from competent authorities.
The AISDP for a withdrawn system should include a withdrawal record documenting the date, the reason for withdrawal, all deployer notifications sent with delivery confirmations, the transition arrangements provided, and the ongoing obligations that persist after withdrawal. The regulatory notification and registration update requirements described here form part of the broader end-of-life process, and organisations should execute the withdrawal procedures as one workstream within the end-of-life plan, ensuring that regulatory notifications are coordinated with the deployer transition and technical shutdown.
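The withdrawal record described above can be sketched as a data structure. Field names and example values are illustrative assumptions; the retention calculation reflects the Article 18 rule that the ten-year period runs from market placement, not from withdrawal.

```python
# Hypothetical withdrawal record mirroring the fields in the text:
# date, reason, deployer notifications, transition arrangements, and
# the original placement date from which retention is calculated.

from dataclasses import dataclass
from datetime import date

@dataclass
class WithdrawalRecord:
    withdrawal_date: date
    reason: str                        # commercial, technical, or compliance
    deployer_notifications: list[str]  # notification IDs with delivery confirmation
    transition_arrangements: str
    placed_on_market: date             # Article 18 retention runs from this date

    def retention_end(self) -> date:
        """Ten-year retention runs from market placement, not withdrawal."""
        return self.placed_on_market.replace(year=self.placed_on_market.year + 10)

# Illustrative entry with invented dates and IDs.
record = WithdrawalRecord(
    withdrawal_date=date(2030, 6, 1),
    reason="compliance",
    deployer_notifications=["N-001", "N-002"],
    transition_arrangements="Migration to successor system by 2030-09-01.",
    placed_on_market=date(2027, 3, 15),
)
```

Note that `retention_end()` here deliberately ignores the withdrawal date, making the record itself document why the obligation outlives the system.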