Articles 49 and 71 require providers of high-risk AI systems to register in the publicly accessible EU database before placing the system on the market or putting it into service. This section covers database registration, the national competent authority landscape, inspection readiness, regulatory sandbox participation, multi-jurisdiction deployment, the penalty structure under Article 99, and system withdrawal procedures.
Article 71 establishes a publicly accessible EU database for registering high-risk AI systems listed in Annex III, managed by the European Commission. The database must be user-friendly, easily navigable, and machine-readable. It serves three distinct purposes: providing public transparency about which high-risk AI systems are operating in the EU market, enabling market surveillance authorities to identify and monitor systems within their jurisdiction, and creating a public record that affected persons can consult to understand whether AI systems are involved in decisions affecting them.
The Conformity Assessment Coordinator completes registration before the system is placed on the market or put into service. The information required is specified in Annex VIII and divided into three sections corresponding to different registration scenarios.
Section A covers provider registration under Article 49(1) for high-risk systems under Annex III, excluding critical infrastructure systems under Annex III point 2. Providers must submit: the provider's name, address, and contact details; the details of any person submitting on the provider's behalf; the authorised representative's details where applicable; the AI system's trade name and any additional unambiguous reference allowing identification and traceability; a description of the intended purpose; the system's status (on the market, in service, no longer on the market or in service, or recalled); the type, number, and expiry date of any certificate issued by a notified body along with that body's identification; the member states in which the system has been deployed; the URL of any comparable national or EU database entry; the URL for additional information on system operation or instructions for use; a concise description of the means by which training, validation, and testing data was collected; and electronic instructions for use.
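The Annex VIII Section A fields listed above can be sketched as a simple record type with a pre-submission completeness check. This is an illustrative Python sketch only: the field names are our own shorthand, not the EU database's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record type for Annex VIII Section A; field names are
# illustrative shorthand, not the database's actual submission schema.
@dataclass
class SectionARegistration:
    provider_name: str
    provider_address: str
    provider_contact: str
    submitter_details: Optional[str]    # person submitting on provider's behalf
    authorised_rep: Optional[str]       # where applicable
    trade_name: str
    unique_reference: str               # unambiguous identification and traceability
    intended_purpose: str
    status: str                         # e.g. "on market", "in service", "recalled"
    certificate_details: Optional[str]  # type, number, expiry, notified body ID
    member_states_deployed: list
    comparable_db_url: Optional[str]
    further_info_url: Optional[str]
    data_collection_summary: str        # training/validation/testing data means
    instructions_for_use_url: str       # electronic instructions for use

    def missing_mandatory(self) -> list:
        """Return names of mandatory fields left empty before submission."""
        mandatory = ["provider_name", "provider_address", "provider_contact",
                     "trade_name", "unique_reference", "intended_purpose",
                     "status", "data_collection_summary",
                     "instructions_for_use_url"]
        return [f for f in mandatory if not getattr(self, f)]
```

A pre-submission gate would simply refuse to submit while `missing_mandatory()` is non-empty.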
Each item demands careful preparation. The "description of the intended purpose" must be precise and consistent with the wording in AISDP Module 1. The "concise description of data collection means" must be factually accurate without disclosing commercially sensitive details. The Technical SME prepares the electronic instructions for use in a format suitable for digital publication.
Section B covers provider registration under Article 49(2) for non-high-risk determinations. Providers who have concluded under Article 6(3) that their Annex III system is not high-risk must still register. They submit identification details, the system's trade name, the intended purpose description, the conditions under Article 6(3) justifying the non-high-risk determination, a short summary of the grounds for this conclusion, and the system's status. This registration is particularly important because it creates a public record of the provider's self-assessment. If a market surveillance authority later disagrees with the classification, the registered justification becomes central evidence. The Article 6(3) justification must therefore be thorough and defensible.
Section C covers deployer registration under Article 49(3). Public authority deployers must register themselves and the system's use, providing the deployer's contact details, the submitter's details, the URL of the system's existing entry in the EU database linking to the provider's registration, a summary of the Fundamental Rights Impact Assessment findings under Article 27, and a summary of any Data Protection Impact Assessment under GDPR Article 35. The deployer registration creates a chain from the provider's system entry to each public authority's specific use of that system, enabling oversight of how high-risk systems are deployed across the public sector.
High-risk AI systems under Annex III points 1, 6, and 7, covering biometric identification for law enforcement, migration, asylum, and border control, are registered in a secure, non-public section of the EU database. Only the Commission and national authorities designated under Article 74(8) can access this section. The information submitted is a subset of the full Section A requirements. Critical infrastructure systems under Annex III point 2 are registered at national level, outside the EU database entirely, reflecting their sensitivity and the national security considerations involved.
Article 60 permits providers and prospective providers of high-risk AI systems listed in Annex III to test those systems in real-world conditions before placing them on the market, provided specific safeguards are met. This provision applies to organisations conducting pilot deployments, A/B testing with real users, staged rollouts, or any form of pre-market evaluation exposing the system to real-world inputs and real affected persons outside a controlled environment.
The threshold triggering Article 60 is exposure to real-world conditions with real subjects. Internal testing on historical data, synthetic data, or within a controlled test environment with consenting employees acting as test subjects does not trigger the obligation. The obligation arises when the system processes real inputs from, or produces outputs that affect, persons participating in the test under real or near-real operational conditions. Organisations should assess each pilot deployment against this threshold and document the determination.
The EU database registration is publicly accessible, meaning errors are visible to market surveillance authorities, deployers, affected persons, competitors, and the media. Quality assurance before submission is not optional.
A structured internal review should precede every submission. The Conformity Assessment Coordinator prepares the registration data by extracting required fields from the AISDP and formatting them according to the database's submission requirements. The Technical SME reviews the technical content, including the system description, data collection description, and intended purpose, for accuracy and consistency with the AISDP. Legal counsel reviews legal content, including provider identification, authorised representative details, and any non-high-risk justification, for accuracy and completeness. The AI Governance Lead provides final approval before submission.
Registration information must be consistent with the AISDP, the Declaration of Conformity, and the Instructions for Use. The "description of the intended purpose" in the registration must match the intended purpose described in AISDP Module 1. The "concise description of data collection means" must be factually consistent with the data governance documentation in AISDP Module 4. Any inconsistency creates a compliance vulnerability: a competent authority comparing the registration against the AISDP may treat discrepancies as evidence of inadequate quality management.
The Conformity Assessment Coordinator should maintain a mapping table tracing each registration field to its source in the AISDP, enabling rapid consistency verification whenever either document is updated. After submission, the organisation should verify the published entry within one week. Display formatting, character encoding, and field truncation can introduce discrepancies that must be corrected immediately.
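The mapping table and consistency check described above can be sketched in a few lines. The mapping keys, source labels, and function below are assumptions for illustration, not a prescribed format.

```python
# Illustrative field-to-source mapping table the Coordinator might maintain;
# module names and field keys are assumptions, not a mandated structure.
REGISTRATION_SOURCE_MAP = {
    "intended_purpose": "AISDP Module 1 / Intended Purpose",
    "data_collection_summary": "AISDP Module 4 / Data Governance",
    "certificate_details": "Declaration of Conformity / Certificate",
}

def check_consistency(registration: dict, aisdp: dict) -> list:
    """Return registration fields whose value diverges from the AISDP source."""
    return [
        field for field, source in REGISTRATION_SOURCE_MAP.items()
        if registration.get(field) != aisdp.get(source)
    ]
```

Running this check whenever either document changes gives the rapid consistency verification the mapping table is meant to enable.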
The European AI Office, established within the European Commission, serves as the central coordination point for AI Act implementation across the EU. It coordinates the consistent application of the AI Act across member states, develops guidelines, templates, and codes of practice, oversees GPAI model compliance and enforcement, manages the scientific panel of independent experts, administers the EU database, and monitors implementation progress. For providers of high-risk systems, the AI Office's most immediately relevant outputs are the implementing acts, delegated acts, and guidance documents that clarify ambiguous requirements.
The AI Office also has direct enforcement powers over GPAI model providers under Article 88. Providers of GPAI models with systemic risk interact directly with the AI Office, bypassing national competent authorities. For organisations whose high-risk systems incorporate GPAI models, for example by fine-tuning a foundation model, this creates a dual regulatory relationship: the national competent authority oversees the high-risk system, and the AI Office oversees the underlying GPAI model.
Organisations should monitor the AI Office's publications systematically. Guidelines, codes of practice, and implementing guidance refine the AI Act's requirements. These documents are non-binding in themselves, though they carry significant interpretive weight. A market surveillance authority assessing compliance will reference the AI Office's guidance. Departing from published guidance without documented justification is a compliance risk. Organisations can also participate in the AI Office's consultation processes, contributing expertise and anticipating forthcoming requirements.
Market surveillance authorities have broad investigative powers under Article 74, including the power to request documentation, conduct on-site inspections, and require access to the system's logging infrastructure. Organisations should maintain an "inspection-ready" posture at all times. This means the AISDP and evidence pack are current and not waiting for the next scheduled review. Evidence is organised and accessible so that an inspector does not need to wait for someone to locate it. Monitoring dashboards are operational and displaying current data. The human oversight interface can be demonstrated on request. Key personnel, including the AI Governance Lead, Technical SME, and Legal and Regulatory Advisor, are aware of their roles during an inspection and can be made available at short notice.
The Conformity Assessment Coordinator creates and maintains a pre-configured "regulatory-inspector" IAM role. This role provides read-only access to the evidence repository containing all AISDP modules, assessment records, and non-conformity registers. It also grants access to monitoring dashboards showing current and historical compliance metrics, the logging infrastructure enabling log queries for specific time ranges and inference IDs, the model registry with model metadata and provenance records, and AISDP documentation in current and historical versions. Access to proprietary source code, commercial contracts, employee personal data, and other non-compliance-relevant information should be excluded. The Legal and Regulatory Advisor tests the role monthly by activating it and confirming all expected access works.
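A minimal sketch of the regulatory-inspector role as a policy table, with the read-only scope and exclusions described above. The resource names and the check function are illustrative, not any cloud provider's actual IAM API.

```python
# Hypothetical policy table for the "regulatory-inspector" role; resource
# names are illustrative labels for the systems described in the text.
INSPECTOR_ROLE = {
    "allow_read": {
        "evidence_repository",    # AISDP modules, assessments, NC registers
        "monitoring_dashboards",
        "logging_infrastructure",
        "model_registry",
        "aisdp_versions",
    },
    "deny": {
        "source_code",
        "commercial_contracts",
        "employee_personal_data",
    },
}

def inspector_can(action: str, resource: str) -> bool:
    """Read-only role: the deny list wins, and writes are never permitted."""
    if resource in INSPECTOR_ROLE["deny"]:
        return False
    return action == "read" and resource in INSPECTOR_ROLE["allow_read"]
```

The monthly activation test the Legal and Regulatory Advisor performs would exercise exactly these allow and deny paths.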
Article 99 establishes a graduated penalty framework with three tiers calibrated to the severity of the violation, creating meaningful deterrence backed by the investigative powers national competent authorities hold under Article 74.
Tier 1 covers prohibited AI practices under Article 5, carrying fines up to EUR 35 million or 7 per cent of global annual turnover. This is the highest penalty tier in the EU regulatory landscape, exceeding GDPR's maximum of EUR 20 million or 4 per cent. It covers subliminal manipulation, exploitation of vulnerable persons, social scoring, and unauthorised real-time biometric identification.
Tier 2 covers high-risk system obligation breaches under Articles 8 to 15, 17, 25 to 27, and 43 to 49, carrying fines up to EUR 15 million or 3 per cent of global annual turnover. This tier is most directly relevant to AISDP preparation, covering failure to maintain adequate technical documentation, failure to conduct conformity assessment, failure to register in the EU database, failure to implement human oversight, failure to establish post-market monitoring, and failure to report serious incidents within required timelines.
Tier 3 covers incorrect, incomplete, or misleading information provided to notified bodies or competent authorities, carrying fines up to EUR 7.5 million or 1 per cent of global annual turnover. This includes inaccurate EU database registration data, misleading statements in the Declaration of Conformity, or incomplete responses to competent authority information requests.
In each tier, the higher of the two amounts (absolute figure or turnover percentage) applies, except for SMEs and start-ups, where Article 99(6) provides that the lower amount applies.
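The tier arithmetic above can be expressed as a short function. The figures restate the text (EUR 35m/7%, 15m/3%, 7.5m/1%) and the SME rule from Article 99(6); the function itself is only a sketch of the maximum-exposure calculation, not a penalty predictor.

```python
# Article 99 tiers as (fixed amount in EUR, share of global annual turnover).
TIERS = {
    1: (35_000_000, 0.07),   # prohibited practices (Article 5)
    2: (15_000_000, 0.03),   # high-risk obligation breaches
    3: (7_500_000, 0.01),    # incorrect, incomplete, or misleading information
}

def max_fine(tier: int, global_turnover: float, is_sme: bool = False) -> float:
    """Maximum fine: the higher of the two amounts, or the lower for
    SMEs and start-ups under Article 99(6)."""
    fixed, pct = TIERS[tier]
    amounts = (fixed, global_turnover * pct)
    return min(amounts) if is_sme else max(amounts)
```

For a provider with EUR 1 billion turnover, the Tier 1 exposure is the 7 per cent figure (EUR 70 million) because it exceeds the EUR 35 million cap.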
Article 57 requires each member state to establish at least one AI regulatory sandbox by August 2026, providing a controlled environment in which providers can develop, train, validate, and test AI systems under regulatory supervision before placing them on the market. Participation is voluntary but offers significant advantages for organisations developing novel or high-risk systems.
Sandbox participation provides direct regulatory feedback on the system's compliance approach before the conformity assessment, which reduces the risk of a failed assessment or post-market enforcement action. It creates a documented track record of regulatory cooperation, strengthening the organisation's credibility with market surveillance authorities. Sandbox participants benefit from a degree of regulatory flexibility: Article 57(8) permits the competent authority to agree on conditions that facilitate innovation. For systems in novel domains where the application of the AI Act's requirements is unclear, sandbox participation can help establish precedent.
The commitment is significant. Sandbox programmes typically run six to twelve months. The organisation must prepare an application demonstrating the system's intended purpose, risk profile, and development stage. Once admitted, regular reporting to the supervising authority is expected, including progress updates, test results, and any issues encountered.
Organisations should prioritise sandbox participation for their highest-risk or most novel systems, where regulatory uncertainty is greatest and the benefit of direct supervisory feedback is most valuable. Lower-risk systems with well-understood compliance pathways are better served by the standard internal conformity assessment process.
Organisations deploying high-risk AI systems across multiple member states face a coordination challenge that compounds with each additional jurisdiction. The AI Act is a regulation and applies uniformly, but the implementation ecosystem of competent authorities, guidance documents, inspection practices, and enforcement priorities varies by member state.
The organisation should designate a single internal coordination point, typically the Conformity Assessment Coordinator or a dedicated regulatory affairs function. This role maintains a register of all relevant authorities across deployment jurisdictions, monitoring jurisdiction-specific guidance and procedural requirements, coordinating registration and reporting across multiple databases and authorities, ensuring consistency of information submitted to different authorities, and managing language requirements. The jurisdiction register should capture, for each deployment member state, the designated national competent authority, the market surveillance authority if different, the relevant data protection authority, any sector-specific regulators with overlapping jurisdiction, published guidance, preferred communication channels and language requirements, and any jurisdiction-specific registration requirements. The register should be reviewed quarterly.
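The jurisdiction register described above can be sketched as a simple record with a quarterly-review flag. The field names and the 91-day cycle are illustrative assumptions.

```python
from datetime import date, timedelta

# Illustrative entry shape for the jurisdiction register; field names
# are assumptions, not a mandated format.
def jurisdiction_entry(member_state, competent_authority, market_surveillance,
                       dpa, sector_regulators, language, last_reviewed):
    return {
        "member_state": member_state,
        "competent_authority": competent_authority,
        "market_surveillance_authority": market_surveillance,
        "data_protection_authority": dpa,
        "sector_regulators": sector_regulators,
        "language_requirements": language,
        "last_reviewed": last_reviewed,
    }

def review_overdue(entry: dict, today: date, cycle_days: int = 91) -> bool:
    """Flag entries not reviewed within the quarterly cycle (~91 days)."""
    return (today - entry["last_reviewed"]) > timedelta(days=cycle_days)
```

A periodic job over all entries would surface jurisdictions whose register data has gone stale.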
Several practical challenges arise for multi-state deployments. National authorities may interpret ambiguous provisions differently. One authority may consider a specific use case within Annex III scope while another classifies it differently. One authority may interpret "sufficient level of AI literacy" under Article 4 to require formal certification while another accepts documented on-the-job training. Staggered authority maturity means engagement strategies must be calibrated per jurisdiction: proactive engagement with mature authorities, monitoring with developing authorities, and conservative compliance posture with silent authorities.
Voluntary withdrawal may occur for commercial reasons such as product discontinuation, for technical reasons where the system can no longer be maintained to the required standard, or for compliance reasons where the organisation determines the system cannot achieve or maintain conformity. Voluntary withdrawal requires updating the EU database registration status to "no longer on the market/in service," notifying all deployers with a timeline, providing transition guidance covering alternative systems, data export, and wind-down procedures, and retaining the complete AISDP, evidence pack, and version history for the ten-year retention period.
A competent authority may order withdrawal of a non-conforming system under Article 79. Mandated withdrawal imposes additional obligations: the provider must take corrective action or withdraw the system within the authority's timeframe, inform other member state authorities and the Commission if non-compliance affects cross-jurisdiction systems, and failure to comply is an aggravating factor for penalty determination. Article 79 also empowers authorities to order a recall, which requires the provider to retrieve the system from deployers who have already acquired it. Recall is more severe than withdrawal and is typically reserved for systems presenting a serious risk. The provider must notify all deployers, coordinate recall logistics, and report the outcome to the competent authority.
Withdrawal does not extinguish the provider's obligations. The ten-year documentation retention period continues. If serious incidents come to light after withdrawal, for example where affected persons report harm that occurred while the system was in service, the provider must still report them under Article 73. Post-market monitoring obligations relating to the period the system was in service also persist: the provider may be required to analyse historical monitoring data in response to post-withdrawal investigations.
Conflicting guidance from different member states is a realistic risk during the early implementation period. National competent authorities may interpret ambiguous provisions differently, particularly where the Commission has not yet published implementing guidance.
The Legal and Regulatory Advisor should maintain a register of jurisdiction-specific guidance relevant to the organisation's AI systems. When a new guidance document is published by any member state authority in which the system is deployed, it should be assessed for consistency with guidance from other relevant authorities and with the AI Office's publications. Where a conflict is identified, the register records the specific provision in dispute, the conflicting interpretations, and the affected AISDP modules.
The default resolution is to adopt the more conservative interpretation, unless doing so would conflict with a third authority's guidance or create a technical impossibility. The rationale for the chosen interpretation is documented in the AISDP's compliance rationale section with references to the specific guidance documents considered. If the conflict is material, affecting system design, the human oversight model, or the scope of the intended purpose, the organisation should consider raising the issue with the AI Office, which has a mandate to coordinate consistent application across member states. An advisory opinion from the competent authority in the primary deployment jurisdiction should be sought, documenting both the request and the response as evidence.
Regardless of how any conflict is resolved, the organisation should document its interpretation, its reasoning, the guidance it considered, and the evidence supporting its position. If the conflict is later resolved by the AI Office, a court, or harmonised standards, the organisation should assess whether the resolution affects its compliance posture and update the AISDP accordingly. A well-documented position, even if it later proves to be on the wrong side of a resolved conflict, demonstrates good faith compliance effort.
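The default resolution logic described above can be sketched as a register entry plus a selection rule. Everything here is illustrative: the 'strictness' rank would be assigned during legal review, and the record shape is an assumption.

```python
# Sketch of the guidance-conflict register and the "adopt the more
# conservative interpretation" default; all names are illustrative.
def resolve_conflict(interpretations: list) -> dict:
    """Pick the strictest interpretation and record the full context.

    Each interpretation is a dict with 'authority', 'position', and a
    'strictness' rank assigned during legal review (higher = stricter).
    """
    chosen = max(interpretations, key=lambda i: i["strictness"])
    return {
        "adopted": chosen,
        "considered": interpretations,
        "rationale": f"Most conservative reading ({chosen['authority']}) "
                     "adopted pending AI Office clarification.",
    }
```

Keeping the full 'considered' list in the record preserves the documented good-faith position the text recommends.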
Yes. Providers who conclude under Article 6(3) that their Annex III system is not high-risk must still register with their identification details, the non-high-risk justification, and the system's status. This creates a public record of the self-assessment that becomes central evidence if a market surveillance authority later disagrees with the classification.
Translation costs alone for a five-language deployment typically fall between EUR 10,000 and EUR 30,000 per system initially, with EUR 3,000 to EUR 10,000 annual maintenance. Additional costs include regulatory monitoring per jurisdiction, local legal counsel, incident response coverage across time zones and languages, and deployer support.
Adopt the more conservative interpretation unless doing so conflicts with a third authority's guidance or creates a technical impossibility. Document the rationale with references to specific guidance documents in the AISDP. For material conflicts affecting system design or oversight, consider raising the issue with the AI Office, which coordinates consistent application across member states.
The 30-minute drill is a benchmark used during annual mock inspections: the team must produce each category of requested regulatory artefact within 30 minutes. If locating a document takes hours because only one person knows where it is stored, the rehearsal has identified a critical gap. Results are documented as evidence and gaps tracked in the non-conformity register.
Providers must register tests in the EU database using Annex IX information before commencing, obtain informed consent under Article 61, ensure outputs are reversible or disregardable, and update registration if testing is suspended or terminated.
Three tiers: EUR 35 million or 7% turnover for prohibited practices, EUR 15 million or 3% for high-risk obligation breaches, and EUR 7.5 million or 1% for misleading information. SMEs and start-ups pay the lower amount.
Maintain inspection-ready posture with current documentation, pre-configured regulatory access IAM roles, operational monitoring dashboards, and annual mock inspection rehearsals using the 30-minute drill benchmark.
Direct regulatory feedback before conformity assessment, documented cooperation track record, potential innovation-facilitating conditions under Article 57(8), and precedent-setting for novel domains with unclear requirements.
Ten-year documentation retention continues, serious incidents discovered post-withdrawal must still be reported under Article 73, and historical monitoring data may be required for investigations. Mandated withdrawal under Article 79 imposes additional corrective action obligations.
Before commencing real-world testing, the provider must register the test in the EU database in accordance with Article 71(4), using the information specified in Annex IX. This covers five categories: a Union-wide unique identification number assigned upon registration; provider and deployer contact details; a brief description of the system including its intended purpose; a summary of the main characteristics of the real-world testing plan; and information on any suspension or termination.
Article 61 requires informed consent from test subjects before participation. Consent must be freely given, specific, informed, and unambiguous. Test subjects must be told they are participating in a test, what the system does, what data will be collected, and their right to withdraw at any time without consequence. For systems involving automated decision-making, the consent must explain how outputs will be used and what safeguards are in place. Consent records must be retained for ten years as required by Article 18.
Article 60(4) imposes several operational safeguards. The test must not adversely affect health or safety. Outputs must be reversible or disregardable, meaning a human must be able to override or ignore any AI-generated decision. The test must not produce effects that would otherwise require completed conformity assessment. Test subjects must be able to contact the provider to raise concerns. Qualified personnel must supervise throughout.
Real-world testing typically occurs during the testing, validation, or pre-deployment phases. Results feed directly into the risk assessment, the fairness evaluation, and the post-market monitoring baseline. The AI System Assessor should review the testing outcomes as part of the conformity assessment evidence, particularly where testing revealed performance differences between controlled evaluation and real-world deployment conditions. The registration must be updated if testing is suspended or terminated, and the final test report should be incorporated into the AISDP evidence pack.
The AI Act requires registration information to be "kept up to date." This creates an ongoing obligation extending for the system's entire operational lifetime. Any change to the registered information, including changes to system status, intended purpose, deployment member states, or certification details, must be reflected in the database. A change management process should be established that triggers database updates whenever relevant changes occur.
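The change-management trigger described above is essentially a diff between the registered record and the current system record. A hedged sketch, where the field names are illustrative:

```python
# Registered fields whose change triggers an EU database update;
# names are illustrative shorthand for the Annex VIII items.
REGISTERED_FIELDS = ["status", "intended_purpose",
                     "member_states_deployed", "certificate_details"]

def registration_updates_needed(registered: dict, current: dict) -> dict:
    """Return {field: (old, new)} for every registered field that changed."""
    return {
        f: (registered.get(f), current.get(f))
        for f in REGISTERED_FIELDS
        if registered.get(f) != current.get(f)
    }
```

Wiring this check into the release or change-approval process makes the "kept up to date" obligation an automatic trigger rather than a periodic manual review.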
The registration process itself is expected to be an online submission through the Commission's platform. The technical format for electronic instructions for use has not been fully standardised, and organisations should prepare these in commonly accessible digital formats (PDF, HTML) pending further guidance.
The national competent authority landscape as of early 2026 presents a fragmented picture that organisations deploying across multiple jurisdictions must navigate carefully. Frontrunner states include Spain, which established the Agencia Española de Supervisión de la Inteligencia Artificial (AESIA) via Royal Decree 729/2023, with a national regulatory sandbox launched and published procedures available. Ireland operates a distributed model with 15 sector-specific competent authorities and 9 fundamental rights authorities, with national AI Office legislation progressing. The Netherlands coordinates under the Ministry of Economic Affairs with a regulatory sandbox proposal published.
Among progressing states, Germany designated the Bundesnetzagentur as primary market surveillance authority via the draft KI-MIG, with BaFin covering financial sector AI systems, but missed the August 2025 deadline due to a disrupted legislative calendar. France distributes responsibilities across the DGCCRF for consumer protection, CNIL for data protection, and the Defender of Rights for fundamental rights, with broader designation progressing through legislative channels.
As of late 2025, fourteen member states had not yet designated or established any competent authority. Hungary and Italy had not identified their fundamental rights protection authorities despite the November 2024 deadline. Twenty-one member states had not submitted national implementation legislation by the August 2025 deadline. This fragmentation means organisations cannot complete certain compliance steps until the relevant member state has designated its authorities. The prudent approach is to monitor the IAPP EU AI Act Regulatory Directory and similar trackers, establish contact with designated authorities as early as possible, and prepare documentation adaptable to varying national procedural requirements.
Annual inspection rehearsals are essential for identifying readiness gaps. The rehearsal should replicate the real experience as closely as possible. Mock inspectors, ideally an external party unfamiliar with the system's specifics, arrive with limited notice, request specific records using regulatory language (such as "please provide the records required under Annex IV point 2(b)"), ask probing questions about the system's design and operation, test the team's ability to explain the risk management process and fairness methodology, and request log exports for specific time ranges. A useful benchmark is the "30-minute drill": for each category of regulatory request, can the team produce the requested artefact within 30 minutes? If locating a document takes hours because it is stored in a personal file that only one team member knows about, the rehearsal has identified a critical gap. Rehearsal results are documented as Module 10 evidence and gaps tracked in the non-conformity register with remediation actions.
During a real inspection, the AI Governance Lead serves as primary point of contact, coordinating the organisation's response. A designated Inspection Coordinator manages logistics: scheduling interviews, retrieving documents, arranging system access, and maintaining a log of every document provided and every question asked. The organisation should provide everything within the lawful scope promptly and cooperatively. Obstructing or delaying an inspection carries penalties under Article 99(4). Where a request touches on commercially sensitive information beyond the regulatory scope, the Legal and Regulatory Advisor should engage with inspectors to agree on appropriate confidentiality protections.
Post-inspection, the authority may issue findings, recommendations, or corrective action requirements. Each finding is entered into the Non-Conformity Register, assigned to a responsible person, and remediated within the required timeline. Inspection findings may reveal systemic weaknesses affecting other AI systems in the organisation's portfolio, in which case the AI Governance Lead should assess whether broader remediation is needed.
Competent authorities may initiate enforcement proceedings in response to several triggers: proactive market surveillance, complaints from affected persons, deployers, or competitors, the provider's own serious incident notification under Article 73, cross-border referrals between authorities, or media and civil society reporting. The AISDP is central to the enforcement process, as the authority's first request will typically be for complete technical documentation. An AISDP that is incomplete, inconsistent with the deployed system, or unsupported by evidence is itself a non-compliance finding that may trigger the Tier 2 penalty.
Article 99(7) directs authorities to take all relevant circumstances into account when determining penalty amounts, including the nature, gravity, and duration of the infringement, whether corrective actions were taken promptly, the degree of cooperation with the authority, the organisational measures implemented, and whether the provider proactively disclosed the infringement. Aggravating factors include deliberate infringement, failure to take corrective action after the problem was identified, and a history of previous infringements. The quality of the compliance programme is itself a mitigating factor, meaning the AISDP forms part of the evidence determining the consequence of any deficiency found.
The AI System Assessor integrates sandbox findings and supervisory feedback into the AISDP. Where the competent authority has reviewed and accepted specific aspects of the system's design or compliance approach, that acceptance is documented as supporting evidence. Sandbox exit reports, where the supervising authority provides a formal summary of outcomes, are valuable evidence artefacts for the AISDP compliance record.
Incident reporting under Article 73 requires pre-identified reporting channels for every deployment jurisdiction, pre-translated incident report templates where required, and a coordination procedure ensuring the same incident is reported consistently to all relevant authorities. National data sovereignty rules may add provisions beyond the GDPR for personal data processed in certain member states.
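The coordination procedure can be sketched as a fan-out from one canonical incident record, so every authority receives identical substantive content through its pre-identified channel. The channel endpoints and language codes below are placeholders, not real reporting addresses:

```python
# Hypothetical per-jurisdiction channel register (placeholder values).
CHANNELS = {
    "DE": {"endpoint": "incident-reports@example-de", "language": "de"},
    "FR": {"endpoint": "incident-reports@example-fr", "language": "fr"},
}

def build_reports(incident: dict, jurisdictions: list[str]) -> list[dict]:
    """Render the same incident once per jurisdiction so content stays consistent."""
    return [
        {
            "jurisdiction": j,
            "endpoint": CHANNELS[j]["endpoint"],
            "language": CHANNELS[j]["language"],
            # Identical substantive content in every report.
            "incident_id": incident["id"],
            "summary": incident["summary"],
        }
        for j in jurisdictions
    ]

reports = build_reports(
    {"id": "INC-042", "summary": "Serious malfunction"}, ["DE", "FR"]
)
```

Generating all reports from a single source record is what prevents the inconsistency between national filings that the coordination procedure is meant to avoid.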
A system that has undergone conformity assessment and bears the CE marking is accepted across the single market without additional national assessment. Article 23 confirms this by prohibiting member states from creating barriers to compliant systems. In practice, organisations should retain complete assessment records and be prepared to demonstrate compliance to any member state's authority. Third-country providers must appoint an authorised representative under Article 22 who has sufficient technical understanding to respond meaningfully to authority enquiries, not merely a legal appointment holding documentation.
Translation requirements scale with deployment jurisdictions. The Instructions for Use must be translated into the official language of each deployment member state under Article 13(3)(b)(ii). The Declaration of Conformity may require translation depending on the member state. Serious incident reports must be in the language accepted by the receiving authority. For a five-language deployment, initial translation costs typically fall between EUR 10,000 and EUR 30,000 per system, with annual maintenance costs of EUR 3,000 to EUR 10,000. AI compliance documentation contains specialised terminology from EU regulatory language, AI and ML terminology, and the application domain, meaning general-purpose machine translation is insufficient for compliance-critical documents. Organisations should use translators with demonstrable competence across all three domains, followed by technical review from a bilingual domain expert.
A standardised translation glossary mapping key terms across all deployment languages ensures consistency and reduces per-document costs. The glossary should cover AI Act-specific terms, technical metrics, and domain-specific terms relevant to the system's application area. Timeline planning should allow two to four weeks for initial professional translation and one to two weeks for updates.
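A glossary like this is straightforward to keep machine-checkable. The sketch below, with illustrative terms and language codes, maps each canonical English term to its approved translations and reports which term/language pairs still lack an entry before a document is sent for translation:

```python
# Illustrative deployment languages and glossary entries.
DEPLOYMENT_LANGUAGES = ["de", "fr", "it", "es", "nl"]

GLOSSARY = {
    "high-risk AI system": {
        "de": "Hochrisiko-KI-System",
        "fr": "système d'IA à haut risque",
    },
    "serious incident": {
        "de": "schwerwiegender Vorfall",
    },
}

def missing_entries(glossary: dict, languages: list[str]) -> list[tuple[str, str]]:
    """List (term, language) pairs still lacking an approved translation."""
    return [(term, lang)
            for term, translations in glossary.items()
            for lang in languages
            if lang not in translations]

gaps = missing_entries(GLOSSARY, DEPLOYMENT_LANGUAGES)
```

Running the check as part of the document release process keeps terminology consistent across all deployment languages and flags gaps before they become per-document translation costs.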
The AISDP for a withdrawn system should include a withdrawal record documenting the date, the reason, the notifications sent, and the obligations that persist after withdrawal. The regulatory notification and registration update requirements form part of the broader end-of-life process, and organisations should coordinate withdrawal procedures with deployer transition planning and technical shutdown.
Communication protocols with competent authorities should be established before they are needed in a crisis. Proactive engagement means introducing the organisation to the relevant authorities early and providing a brief overview of the AI portfolio, the compliance approach, and designated contact points. This is particularly valuable in member states where the competent authority is newly established and still developing its operational procedures. Incident communication should pre-define the reporting channel, the format (the Commission's draft template adapted to national requirements), the authorised signatory, and the follow-up schedule covering supplementary reports, investigation updates, and corrective action confirmations. Some member states may require periodic compliance reporting beyond the AI Act minimum; a register of jurisdiction-specific reporting obligations ensures routine reports are submitted on schedule.
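The register of jurisdiction-specific reporting obligations can be sketched as a simple due-date calculation. The entries, frequencies, and member-state codes below are placeholders for illustration, not actual national requirements:

```python
from datetime import date, timedelta

# Illustrative register of recurring reporting obligations (placeholder data).
OBLIGATIONS = [
    {"jurisdiction": "DE", "report": "annual compliance summary",
     "frequency_days": 365, "last_submitted": date(2024, 6, 1)},
    {"jurisdiction": "FR", "report": "quarterly incident digest",
     "frequency_days": 90, "last_submitted": date(2025, 1, 15)},
]

def due_soon(obligations: list[dict], today: date,
             horizon_days: int = 30) -> list[tuple[str, str, date]]:
    """Return obligations whose next submission falls within the horizon
    (including any already overdue)."""
    upcoming = []
    for o in obligations:
        next_due = o["last_submitted"] + timedelta(days=o["frequency_days"])
        if next_due <= today + timedelta(days=horizon_days):
            upcoming.append((o["jurisdiction"], o["report"], next_due))
    return upcoming

upcoming = due_soon(OBLIGATIONS, date(2025, 5, 20))
```

Reviewing the output of a check like this on a fixed cadence is one way to ensure routine reports are submitted on schedule across every deployment jurisdiction.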