The EU AI Act (Regulation (EU) 2024/1689) establishes the world's first comprehensive regulatory framework for artificial intelligence, with full enforcement for high-risk systems beginning 2 August 2026. Every high-risk AI system placed on the EU market from that date must carry a completed conformity assessment, a signed Declaration of Conformity, CE marking, and registration in the EU database. The evidentiary backbone for all of these requirements is the AI System Documentation Package.
The EU AI Act addresses a fundamental challenge that distinguishes AI systems from conventional software: temporal instability. A traditional application behaves tomorrow the way it behaves today unless someone deliberately changes it. An AI system is designed to improve through learning, and that learning introduces continuous change that conventional governance was never built to handle. Models are retrained on new data, feature distributions shift as the population served evolves, and fine-tuning adjusts behaviour in ways that may be subtle and difficult to document after the fact. The system that passed validation six months ago may no longer be the system running in production.
This temporal instability is what makes AI compliance fundamentally different from compliance for traditional software. A static application can be assessed once and maintained through periodic review. An AI system demands continuous evidence that it still behaves within documented boundaries, that its data inputs remain representative, that its performance across demographic subgroups has not degraded, and that its risk profile has not shifted. The documentation challenge is not to describe the system as it was built, but to maintain a living, traceable record of what the system is now and how it arrived there.
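To make the idea of continuous evidence concrete, here is a minimal sketch in Python (used for illustration throughout this guide; the metric names, baseline values, and tolerance are illustrative assumptions, not prescribed figures). It compares current subgroup performance against the documented validation baseline and flags degradation:

```python
# Baseline subgroup accuracy documented at validation time (illustrative values).
BASELINE = {"group_a": 0.91, "group_b": 0.89}
TOLERANCE = 0.03  # maximum acceptable drop before the system leaves its documented boundary

def degraded_subgroups(current: dict[str, float]) -> list[str]:
    """Return the subgroups whose current performance has fallen below baseline minus tolerance."""
    return [g for g, base in BASELINE.items()
            if current.get(g, 0.0) < base - TOLERANCE]

# e.g. degraded_subgroups({"group_a": 0.90, "group_b": 0.84}) -> ["group_b"]
```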
The AI Act (Regulation (EU) 2024/1689) is the first major regulatory framework to address this challenge at scale. It tells organisations what they must document but is largely silent on how. Articles 8 through 15 set out substantive requirements for high-risk systems. Annex IV specifies documentation contents. The gap between a regulatory requirement and an engineering workflow that satisfies it is where most compliance programmes stall, producing documentation that reads well on paper yet cannot withstand scrutiny because no traceable evidence supports its claims.
This gap is precisely where the Practitioners Implementation Guide operates. It translates every material obligation in the AI Act into concrete engineering practices, governance processes, and organisational structures. It addresses not only what each AISDP module must contain, but how to generate that content as a natural byproduct of the development lifecycle, how to verify it through internal conformity assessment, and how to maintain it through years of post-market operation until the system eventually reaches end-of-life. The AI Act imposes a documentation obligation that is fundamentally different from anything organisations have encountered before: one that must evolve continuously alongside the system it describes.
Article 8 serves as the umbrella provision binding all high-risk obligations together. It requires that high-risk AI systems comply with all Chapter 2 requirements as an integrated whole, taking into account the system's intended purpose and the generally acknowledged state of the art. Compliance with any individual article cannot be assessed in isolation. The conformity assessment must evaluate the system against the full set of requirements, and the AISDP's twelve-module structure reflects this integrative mandate. Each module addresses a specific requirement, but the modules are cross-referenced and interdependent.
The AI Act entered into force on 1 August 2024 with obligations phased across a staggered timeline. Understanding which provisions are already enforceable and which are pending is essential for planning AISDP preparation.
Already in force (February 2025). Prohibited practices under Article 5 took effect on 2 February 2025, banning subliminal manipulation, exploitation of vulnerable groups, social scoring, untargeted facial recognition scraping, emotion recognition in workplaces and educational institutions, criminal risk profiling, and real-time remote biometric identification in public spaces. Organisations operating systems that fall within any of these categories face immediate enforcement risk. The AISDP process cannot remediate a prohibited system; the only compliant action is cessation. AI literacy requirements under Article 4 also became enforceable on this date, requiring providers and deployers to ensure staff involved in AI operation have sufficient AI literacy. This obligation applies across all risk tiers and predates the high-risk framework.
Codes of practice for GPAI (since August 2024). Providers of GPAI models were invited to participate in the Code of Practice drafting process under Article 56. The AI Office Code of Practice was finalised in early 2025 and provides a compliance pathway for GPAI providers. Organisations integrating GPAI models into high-risk systems should monitor the Code of Practice's evolution, as it establishes the expectations that national competent authorities will apply when assessing GPAI provider compliance.
GPAI obligations (August 2025). Providers of general-purpose AI models must comply with transparency obligations, copyright policy requirements, and, for models with systemic risk, additional safety obligations including adversarial testing and serious incident reporting under Articles 51 through 56. Organisations integrating GPAI models into high-risk systems must account for the GPAI provider's compliance status in their own risk assessment and model selection rationale. The penalty framework for prohibited AI practices also became enforceable on this date, with fines of up to EUR 35 million or 7% of global annual turnover.

Full high-risk framework (August 2026). Every high-risk AI system placed on the market must have a completed conformity assessment, a signed Declaration of Conformity, CE marking where applicable, and registration in the EU database. This is the critical deadline for AISDP preparation. From this date, the complete set of obligations under Articles 6 through 51 becomes enforceable.

Extended deadline (August 2027). AI systems constituting safety components of products already covered by Annex I harmonisation legislation have an additional year. These products already have conformity assessment procedures under their respective directives; the AI Act requirements layer on top.

| Date | Provision | Implication |
|---|---|---|
| 1 Aug 2024 | AI Act enters into force | Transition periods begin |
| 2 Feb 2025 | Prohibited practices (Art. 5) + AI literacy (Art. 4) | Immediate enforcement |
| 2 Aug 2025 | GPAI obligations (Arts. 51-56) + penalty framework | Transparency and safety for foundation models |
| 2 Aug 2026 | Full high-risk framework (Arts. 6-51) | Conformity assessment, DoC, CE marking, registration |
| 2 Aug 2027 | Annex I safety component systems | Extended deadline for existing product legislation |

The European Commission's Digital Omnibus proposal, currently in legislative process, may adjust certain timelines and requirements. Organisations should monitor its progress but should not rely on prospective changes as a reason to defer preparation. Member states were required to designate their national competent authorities by August 2025. As of early 2026, the maturity of these authorities varies considerably across the EU. Organisations should engage with their relevant authorities early, even where those authorities are still establishing their procedures. Regulator interaction provides a detailed landscape assessment.
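The staggered timeline lends itself to a simple automated status check. The sketch below is a hypothetical helper (not part of the Act or of this guide) that derives enforcement status from the milestone dates in the table above:

```python
from datetime import date

# Milestone dates copied from the timeline table above.
MILESTONES = [
    (date(2024, 8, 1), "AI Act in force; transition periods begin"),
    (date(2025, 2, 2), "Prohibited practices (Art. 5) and AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI obligations (Arts. 51-56) and penalty framework"),
    (date(2026, 8, 2), "Full high-risk framework (Arts. 6-51)"),
    (date(2027, 8, 2), "Annex I safety component systems"),
]

def enforceable_on(today: date) -> list[str]:
    """Return the milestones whose enforcement dates have already passed."""
    return [label for d, label in MILESTONES if d <= today]

print(enforceable_on(date(2026, 1, 15)))
# -> first three milestones; the full high-risk framework is not yet enforceable
```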
The AI System Documentation Package is the operational expression of the Article 11 obligation. Providers of high-risk AI systems must draw up technical documentation in accordance with Annex IV before the system is placed on the market or put into service. A competent authority examiner who has never seen the system should be able to understand its design, behaviour, risk profile, and compliance posture from the AISDP alone. That is the self-standing test, and meeting it requires sustained effort across every compliance domain.
The AISDP is the document a national competent authority will open first. It is the file a notified body will scrutinise for technical rigour. It is the reference that internal governance, legal counsel, and engineering teams will consult throughout the system's operational life. A deficient AISDP exposes the organisation to fines reaching EUR 15 million or 3% of global annual turnover, enforcement action, and reputational harm that may exceed any financial penalty. A well-constructed AISDP forces the organisation to know its own AI systems with a depth and precision that many have never attempted.
A full AISDP comprises twelve modules, each addressing a specific compliance dimension:
| Module | Title | Key Article(s) |
|---|---|---|
| 1 | System Identity and Intended Purpose | Art. 11, Annex IV(1) |
| 2 | Development Process and Methodology | Annex IV(2) |
| 3 | Architecture and Design Specifications | Annex IV(2)(b-e) |
| 4 | Data Governance and Dataset Documentation | Art. 10, Annex IV(2)(d) |
| 5 | Testing and Validation Results | Art. 9, Annex IV(3) |
| 6 | Risk Management System | Art. 9, Annex IV(2)(b) |
| 7 | Human Oversight Measures | Art. 14, Annex IV(3) |
| 8 | Transparency and User Information | Art. 13, Annex IV(3) |
| 9 | Robustness and Cybersecurity | Art. 15, Annex IV(3) |
| 10 | Record-Keeping and Logging | Art. 12, Annex IV(3) |
| 11 | Fundamental Rights Impact Assessment | Art. 27 |
| 12 | Post-Market Monitoring and Change History | Arts. 72-73 |
Every material claim in the AISDP requires a supporting artefact: a test result, a design document, a signed attestation, a configuration record. The approach described throughout this guide produces that evidence as a natural byproduct of the engineering workflow rather than as a retrospective documentation exercise. The relationship between the guide's domain sections and the twelve AISDP modules is not always one-to-one; several domains contribute evidence to multiple modules, and certain modules draw on multiple domains.

The scope varies by risk tier. High-risk systems require the full twelve-module package. Limited-risk systems require a Standard AISDP covering a subset of modules focused on transparency and classification rationale. Minimal-risk systems need only a Baseline AISDP documenting the classification decision. Risk assessment addresses tier classification in detail.
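The tiered scope can be captured as a small lookup. A minimal sketch, assuming illustrative module subsets for the Standard and Baseline packages (only the full high-risk package is fixed at twelve modules; the subsets below are this sketch's assumptions, not prescribed lists):

```python
# AISDP scope by risk tier. Module numbers refer to the table above.
AISDP_SCOPE = {
    "high": list(range(1, 13)),  # full twelve-module package
    "limited": [1, 8],           # illustrative: identity/classification + transparency
    "minimal": [1],              # illustrative: classification decision only
}

def required_modules(risk_tier: str) -> list[int]:
    """Return the AISDP modules required for a given risk tier."""
    try:
        return AISDP_SCOPE[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier}") from None
```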
The AISDP sits within an ecosystem of six compliance artefacts that together form the complete evidentiary chain from risk classification through to market placement and ongoing monitoring.
The Classification Decision Record precedes the AISDP. It documents the system's classification within the AI Act's risk tiers and provides the rationale for that determination. The CDR addresses the four-tier risk classification under Article 6, the Annex III category determination, and the Article 6(3) exception assessment where applicable. AISDP Module 1 references the CDR as the authoritative source for risk classification. Without a current CDR, the AISDP lacks its foundational premise, and the entire compliance case is undermined.
The Evidence Pack substantiates every claim in the AISDP. Every material assertion, whether a performance metric, a fairness evaluation result, a security test finding, or a governance approval, must trace to a specific artefact in the evidence pack. The evidence register catalogues these artefacts and maps them to the AISDP modules they support. This traceability is what transforms the AISDP from a prose document into a defensible compliance record. Without it, the AISDP contains claims; with it, the AISDP contains evidence.
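What an evidence register entry might look like in practice: a minimal sketch, assuming a simple in-house data model (all field names and example values are illustrative, not prescribed by the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvidenceArtefact:
    artefact_id: str       # stable identifier used in AISDP cross-references
    description: str       # what the artefact is (test report, approval, config record, ...)
    location: str          # path or URI within the evidence pack
    sha256: str            # content hash, making the claim-evidence link tamper-evident
    produced_on: date
    supports_modules: list[int] = field(default_factory=list)  # AISDP modules 1-12

register = [
    EvidenceArtefact(
        artefact_id="EV-0142",
        description="Subgroup performance evaluation, Q2 validation run",
        location="evidence/testing/subgroup_eval_q2.pdf",
        sha256="...",  # computed when the artefact is ingested
        produced_on=date(2026, 4, 3),
        supports_modules=[5, 6],  # Testing and Validation; Risk Management
    ),
]

def modules_with_no_evidence(register, modules=range(1, 13)) -> list[int]:
    """Completeness check: which AISDP modules have no supporting artefact yet."""
    covered = {m for a in register for m in a.supports_modules}
    return sorted(set(modules) - covered)
```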
The Master Compliance Questionnaire provides systematic verification. It walks through every applicable AI Act requirement, prompting the assessor to confirm compliance and reference supporting evidence. The AISDP provides the substantive answers; the MCQ provides the structure that ensures nothing is missed. The MCQ is particularly valuable during internal review cycles, where it functions as a completeness check before the formal conformity assessment.
The Declaration of Conformity is the legal output of the conformity assessment process. It is a formal statement that the system conforms to the Act's requirements. A Declaration without a supporting AISDP is legally indefensible. Signing the Declaration carries liability and insurance implications that must be understood before the signature is applied, as it is a binding legal commitment with material consequences.

The Quality Management System, required by Article 17, describes the organisation's general AI compliance processes. The QMS defines the policies, procedures, responsibilities, and document control that govern all AI compliance activity across the organisation. The AISDP applies those processes to a specific system. Post-market monitoring feeds findings back into the AISDP, ensuring documentation remains current throughout the system's operational life. The Instructions for Use, required by Article 13, draw their content from the AISDP and provide deployers with the information they need to operate the system lawfully.
The AI Act creates overlapping obligations with four major EU legislative instruments that organisations must address in a unified compliance approach. Failure to recognise these overlaps leads to duplicated effort, inconsistent documentation, and gaps where no framework is clearly assigned responsibility.
GDPR (Regulation (EU) 2016/679) governs the processing of personal data used to train, validate, test, and operate AI systems. The AISDP's data governance module must demonstrate GDPR compliance alongside AI Act compliance. This encompasses lawful basis for training data processing under Article 6 GDPR, data subject rights in the context of AI-generated decisions, coordination between the Data Protection Impact Assessment (DPIA) required by GDPR Article 35 and the Fundamental Rights Impact Assessment (FRIA) required by AI Act Article 27, and data retention policies that satisfy both frameworks. Where a system processes special category data under Article 10(5) of the AI Act, the GDPR Article 9 conditions must also be documented. Data governance addresses GDPR alignment in detail, including the specific challenges of demonstrating lawful basis for historical training data and the GDPR status of stored embeddings in RAG architectures.
NIS2 Directive (Directive (EU) 2022/2555) imposes cybersecurity risk management requirements on essential and important entities. Organisations subject to NIS2 must integrate AI system cybersecurity controls with their broader NIS2 risk management measures. The overlap is particularly significant for incident reporting: NIS2 requires notification within 24 hours, while the AI Act's Article 73 serious incident reporting operates on a different timeline. Organisations subject to both must establish coordinated incident response procedures that satisfy both frameworks without conflict. Multi-regime incident reporting coordination is essential to avoid contradictory obligations.
Cyber Resilience Act (Regulation (EU) 2024/2847) applies to AI systems delivered as software products or embedded in hardware. Article 15(5) of the AI Act provides a deemed compliance mechanism: high-risk AI systems that comply with the essential cybersecurity requirements of the CRA are presumed to comply with the Article 15 cybersecurity requirements. This pathway requires careful scope determination, as not all AI systems fall within the CRA's definition of products with digital elements. Cybersecurity explains the deemed compliance pathway and its limitations.

DORA (Regulation (EU) 2022/2554) creates additional obligations for financial sector organisations deploying high-risk AI systems. The intersection with the AI Act requires particular attention to third-party risk management for model providers, ICT incident classification and reporting under DORA's framework alongside AI Act incident reporting, and the DORA third-party risk overlay on model selection decisions. Financial organisations must map their AI model providers against DORA's critical third-party provider framework.

Organisations subject to multiple frameworks should establish a unified compliance mapping identifying where a single control satisfies overlapping requirements, reducing duplication and ensuring consistency. Cybersecurity provides consolidated cross-regulatory mapping tables across all four regimes, including dual and multi-regime incident reporting coordination with timeline comparisons and integrated response procedures.
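One way to operationalise such a unified mapping is a simple control-to-requirement index. A minimal sketch, with illustrative control identifiers and requirement labels drawn from the overlaps described above (the specific control-to-regime assignments are this sketch's assumptions):

```python
# Each control lists the (regime, requirement) pairs it helps satisfy.
# One control satisfying several regimes is what removes duplicated effort.
CONTROL_MAP = {
    "CTL-ENC-01 (encryption of model artefacts at rest)": [
        ("AI Act", "Art. 15 cybersecurity"),
        ("NIS2", "risk management measures"),
        ("CRA", "essential cybersecurity requirements"),
    ],
    "CTL-IR-02 (coordinated incident response runbook)": [
        ("AI Act", "Art. 73 serious incident reporting"),
        ("NIS2", "24-hour incident notification"),
        ("DORA", "ICT incident classification and reporting"),
    ],
}

def regimes_covered(control: str) -> set[str]:
    """Return the set of regulatory regimes a given control contributes to."""
    return {regime for regime, _ in CONTROL_MAP.get(control, [])}
```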
The Practitioners Implementation Guide examines twenty-one interconnected domains across the full lifecycle of a high-risk AI system. Each domain addresses a distinct dimension of AISDP preparation; together they cover everything from initial risk classification through to archival of the final documentation package.
Foundation domains. Risk assessment establishes the four-tier classification, the FRIA, and the iterative risk management lifecycle. It covers five risk identification methods: FMEA, stakeholder consultation, regulatory gap analysis, adversarial red-teaming, and horizon scanning. Risk scoring and calibration, residual risk acceptability thresholds, and the particular challenges of assessing systems built on GPAI models are addressed, including the Article 25(3) information request framework. Model selection addresses compliance consequences of model choice, including origin risk, the fine-tuning provider boundary under Article 25, copyright and intellectual property exposure, geopolitical considerations, and provider data collection practices.
Engineering domains. Data governance covers Article 10 requirements including dataset documentation, completeness assessment, fairness and bias evaluation at both pre-training and post-training stages, data lineage, special category data under Article 10(5), and GDPR alignment. It extends to embedding model governance and knowledge base management for RAG architectures. System architecture structures documentation around an eight-layer reference model covering data ingestion, feature engineering, model inference, post-processing, explainability, human oversight interface, logging and audit, and monitoring. Version control underpins traceability across the AISDP, including the composite version concept that captures every artefact type deployed at a given point in time. CI/CD practices describe automation that produces compliance evidence as a development byproduct. The governance pipeline extends CI/CD with five compliance gates: classification and scope, risk and fundamental rights, fairness and bias, documentation currency, and deployment authorisation (sketched below). Cybersecurity addresses Article 15 resilience alongside NIS2, CRA, and DORA obligations, including the OWASP LLM Top 10 and thirteen AI-specific threat categories.
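A minimal sketch of those five gates as sequential checks over a release candidate, assuming illustrative flag names for the evidence each gate consumes (the gate names come from the guide; everything else is an assumption):

```python
from typing import Callable

Gate = tuple[str, Callable[[dict], bool]]

GATES: list[Gate] = [
    ("classification and scope",    lambda rc: rc["cdr_current"]),
    ("risk and fundamental rights", lambda rc: rc["risk_register_approved"] and rc["fria_current"]),
    ("fairness and bias",           lambda rc: rc["subgroup_metrics_within_bounds"]),
    ("documentation currency",      lambda rc: rc["aisdp_version"] == rc["composite_version"]),
    ("deployment authorisation",    lambda rc: rc["signoff"] is not None),
]

def evaluate_gates(release_candidate: dict) -> list[str]:
    """Return the names of failed gates; an empty list means the release may proceed."""
    return [name for name, check in GATES if not check(release_candidate)]
```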
Assessment and certification. Conformity assessment walks through the Annex VI internal assessment, including documentation standards, assessor independence, non-conformity management, and multi-system coordination. Certification outputs address harmonised standards, CE marking, assessment failure pathways, and Declaration of Conformity liability and insurance implications. Regulator interaction covers EU database registration, NCA engagement, regulatory sandbox participation, inspection readiness, and the penalty structure.

Operations. Post-market monitoring addresses the Article 72 PMM plan, incident reporting under Article 73, deployer monitoring responsibilities, and the complete system end-of-life and decommissioning process including withdrawal, recall, data lifecycle closure, and archival. Operational oversight describes the six-level oversight pyramid from engineering monitoring to board governance, break-glass procedures, and AI literacy requirements.

Advanced domains. GPAI integration addresses building high-risk systems on foundation models, including compensating controls for training data opacity and behavioural unpredictability. RAG compliance covers knowledge base and retrieval pipeline governance, grounding verification, and indirect prompt injection defence. Agentic AI addresses bounded autonomy, tool-use governance, and the three-tier approval gate architecture. Multi-model governance covers composite system documentation and stage-by-stage fairness analysis.

Delivery and cost. The seven-phase delivery process provides a 20 to 28 week implementation workflow with role assignments, deliverables, and dependencies. Compliance costs quantifies first-year and ongoing resource requirements across five cost categories and three organisation sizes.
The AI Act assigns obligations based on role, with each actor in the AI value chain bearing distinct responsibilities. Understanding which role applies to an organisation is the first step in determining the scope of its compliance obligations.
Providers are organisations that develop or place AI systems on the market under their own name or trademark. They bear the heaviest burden: conformity assessment, AISDP preparation, CE marking, Declaration of Conformity, EU database registration, quality management system establishment, and post-market monitoring. The provider is responsible for the entire technical documentation chain and for maintaining it for ten years after market placement. Where an organisation fine-tunes a GPAI model for a high-risk use case, it becomes a provider under Article 25(1)(b), triggering the full set of provider obligations regardless of whether it developed the underlying model.
Deployers are organisations that use AI systems within the EU under their authority. Their obligations are lighter but still consequential: conducting fundamental rights impact assessments under Article 27, ensuring human oversight by competent persons under Article 14, monitoring system performance in accordance with the provider's guidance, reporting serious incidents to both the provider and market surveillance authorities, and maintaining logs for at least six months under Article 26(6). Public authority deployers face an additional obligation to register in the EU database under Article 49(3). The guide addresses these obligations comprehensively, including the complete deployer compliance record structure covering eight modules.
Importers verify that providers outside the EU have completed conformity assessment and that documentation accompanies the system. They must confirm CE marking, Declaration of Conformity, and Instructions for Use are present before making the system available on the EU market. Authorised representatives act on behalf of non-EU providers, maintaining documentation and cooperating with authorities. Deployers should identify which entity in the supply chain is the provider for compliance purposes and direct information requests accordingly.

Three circumstances escalate a deployer to provider status under Article 25(1): placing the system on the market under the deployer's own name or trademark, making a substantial modification to the system, or using the system for a purpose outside the provider's documented intended purpose where the new purpose makes the system high-risk. The consequences of unrecognised escalation are significant, as the organisation would be operating with provider obligations but only deployer-level documentation.
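The three escalation circumstances reduce to a simple disjunction. A minimal sketch with illustrative field names (the circumstances come from Article 25(1) as described above; the data model is an assumption):

```python
from dataclasses import dataclass

@dataclass
class DeploymentFacts:
    rebranded_under_own_name: bool   # placed on the market under the deployer's name or trademark
    substantially_modified: bool     # substantial modification to the system
    repurposed_into_high_risk: bool  # new purpose outside the documented intended purpose

def escalates_to_provider(facts: DeploymentFacts) -> bool:
    """Any one circumstance triggers the full set of provider obligations."""
    return (
        facts.rebranded_under_own_name
        or facts.substantially_modified
        or facts.repurposed_into_high_risk
    )
```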
The AI Act establishes a three-tier penalty structure that became enforceable from August 2025 for prohibited practices and applies fully from August 2026 for high-risk non-compliance. The penalty framework is designed to be proportionate yet deterrent, with maximum fines that rival those under GDPR.
Tier 1: Prohibited practices. Fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for systems violating Article 5 prohibitions. This is the highest penalty tier in the AI Act and exceeds GDPR's maximum of EUR 20 million or 4% of turnover. The prohibited categories include subliminal manipulation, exploitation of vulnerable groups, social scoring, untargeted facial recognition scraping, emotion recognition in workplaces and educational institutions, criminal risk assessment based solely on profiling, and real-time remote biometric identification in publicly accessible spaces. These penalties have been enforceable since August 2025.
Tier 2: High-risk non-compliance. Fines of up to EUR 15 million or 3% of global annual turnover for failure to comply with the requirements of Articles 6 through 51. This covers deficient conformity assessments, missing or inadequate Declarations of Conformity, incomplete AISDPs, failure to establish a quality management system under Article 17, and failure to implement post-market monitoring under Article 72. For deployers, the same maximum applies to failures in human oversight, FRIA completion, incident reporting, and cooperation with competent authorities.
Tier 3: Incorrect or misleading information. Fines of up to EUR 7.5 million or 1% of global annual turnover for providing incorrect, incomplete, or misleading information to competent authorities or notified bodies. This tier catches organisations that misrepresent their compliance status, provide incomplete documentation during inspections, or fail to maintain accurate records. It applies to both providers and deployers.
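The "whichever is higher" ceilings reduce to simple arithmetic. A worked sketch of the statutory maxima stated above (a simplified model; actual fines are set case by case by the competent authorities):

```python
# (fixed ceiling in EUR, percentage of global annual turnover) per tier
TIERS = {
    "prohibited_practice":     (35_000_000, 0.07),  # EUR 35m or 7%
    "high_risk_noncompliance": (15_000_000, 0.03),  # EUR 15m or 3%
    "incorrect_information":   (7_500_000, 0.01),   # EUR 7.5m or 1%
}

def maximum_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the applicable ceiling: the higher of the fixed amount and the turnover percentage."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_annual_turnover_eur)

# For a provider with EUR 2bn turnover, the high-risk ceiling is
# max(15m, 0.03 * 2bn) = EUR 60m.
print(f"{maximum_fine('high_risk_noncompliance', 2_000_000_000):,.0f}")
```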
Article 11(1) permits SMEs, including start-ups, to provide documentation in a simplified manner. The Commission is empowered to establish a simplified form, though as of early 2026 it has not been published. The simplification relates to format and presentation; the substantive obligations of Articles 8 through 15 apply equally. SMEs should not delay preparation on the assumption that the simplified form will substantially reduce their obligations.

For SMEs and start-ups, Article 99 specifies that penalties shall be applied proportionately to the organisation's size, economic viability, and the nature and gravity of the infringement. The proportionality requirement reduces the quantum of penalties but does not eliminate enforcement risk.

Beyond financial penalties, enforcement action includes market withdrawal orders requiring the system to be removed from the EU market, system recall orders where the system has already been distributed, mandatory corrective action plans, and reputational harm that may exceed any monetary penalty. A well-constructed AISDP is the primary defence against all three penalty tiers: it demonstrates compliance with high-risk requirements, supports the Declaration of Conformity, and provides the evidence that authorities examine first during inspections.

National competent authorities are at varying stages of readiness across member states. Harmonised standards are still being developed by CEN and CENELEC. None of this uncertainty excuses delay; the August 2026 deadline is a regulatory fact. Regulator interaction provides a detailed assessment of NCA maturity, engagement strategies, and inspection readiness requirements.
Organisations should begin AISDP preparation immediately if they have not already started. The 20 to 28 week preparation timeline for a medium-complexity system means organisations need to be well under way by early 2026 to meet the August deadline. Brownfield compliance for systems already in production is typically more complex than greenfield preparation, as it requires retrospective documentation of decisions that were made without compliance in mind.
Prerequisites. A discovery exercise identifying all AI systems in the portfolio should be complete, with a Classification Decision Record for each system documenting its position within the AI Act's risk tiers. An evidence pack structure should be established, providing the organisational infrastructure for collecting, storing, and cataloguing the artefacts that substantiate AISDP claims. Governance roles should be assigned, including at minimum an AI Governance Lead, an AI System Assessor, and a Technical SME. Smaller organisations may combine roles, but the responsibilities must be explicitly allocated. A Quality Management System meeting Article 17 requirements should be in place at a foundational level, covering policies, procedures, responsibilities, and document control. The Technical SME and AI System Assessor must have direct access to the system's source code, training infrastructure, model artefacts, data stores, deployment configuration, and monitoring outputs.
Reading guide by role. The guide serves multiple audiences with distinct needs. AI Governance Leads should prioritise risk assessment, conformity assessment, and regulatory landscape sections, with particular attention to residual risk acceptability thresholds, non-conformity management, and Declaration of Conformity liability. Technical SMEs should focus on model selection, data governance, architecture, version control, and CI/CD, particularly the governance pipeline's five compliance gates and the composite version concept. Legal and Regulatory Advisors should focus on regulatory context, FRIA methodology, conformity assessment, penalty framework, and end-of-life regulatory basis. Conformity Assessment Coordinators should prioritise the Annex VI walkthrough, non-conformity register management, multi-system coordination, and registration workflow. DPO Liaisons should focus on GDPR alignment, special category data, and DPIA/FRIA coordination. Executive Leadership needs the strategic overview, penalty exposure, and cost model.
Can AISDP preparation begin before harmonised standards are finalised? Yes, and it is strongly recommended. The 20 to 28 week preparation timeline for a medium-complexity system means organisations should begin no later than early 2026. Earlier preparation allows for iteration and addresses the reality that harmonised standards and national competent authorities are still maturing.

Does the simplified documentation route for SMEs reduce the substantive obligations? The simplification under Article 11(1) relates to format and presentation, not to the substantive obligations of Articles 8 through 15. SMEs must still demonstrate compliance with every requirement; the simplified form, once published by the Commission, will streamline how that evidence is presented.

What about systems already on the EU market? Systems already on the market must comply by the deadline. Brownfield compliance requires retrospective documentation of existing systems, which is typically more complex than greenfield preparation. The delivery process section addresses the brownfield compliance process.

How should organisations manage obligations that overlap across frameworks? Organisations should establish a unified compliance mapping that identifies where a single control or evidence artefact satisfies overlapping requirements. The AISDP data governance module must demonstrate GDPR compliance; the cybersecurity module must align with NIS2 and, where applicable, the Cyber Resilience Act and DORA.

Does signing the Declaration of Conformity carry legal consequences? Yes. The Declaration under Article 47 is a formal legal statement that the system meets the AI Act's requirements. Signing it carries liability and insurance implications that must be understood before signature.

What are the maximum penalties? Up to EUR 35 million or 7% of global turnover for prohibited practices; EUR 15 million or 3% for high-risk non-compliance; EUR 7.5 million or 1% for providing incorrect information.

Which other EU frameworks overlap with the AI Act? The AI Act creates overlapping obligations with GDPR for data governance, NIS2 for cybersecurity, the Cyber Resilience Act for software products, and DORA for financial sector organisations.

What does a full AISDP contain? Twelve modules covering system identity, development process, architecture, data governance, testing, risk management, human oversight, transparency, cybersecurity, record-keeping, FRIA, and post-market monitoring.

Who bears obligations under the AI Act? Providers who develop or place AI systems on the EU market, deployers who use them within the EU, and importers and authorised representatives. Obligations vary by role and risk tier.
The living document obligation. Article 18 requires technical documentation to be kept up to date. Article 72 requires post-market monitoring that feeds findings back into documentation. In practice, the AISDP must be version-controlled, with each revision creating a new version and documented change log. Changes to the system trigger corresponding updates; a model retrained on new data requires updates to Modules 3, 4, and 5 at minimum. Post-market monitoring findings are reflected in the risk register (Module 6) and the change history (Module 12). Quarterly reviews are standard for high-risk systems. The ten-year retention obligation under Article 18 applies from the date the system is placed on the market, covering the complete version history including all superseded versions and their evidence packs.
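The update triggers described above can be encoded so that documentation currency is checked mechanically. A minimal sketch covering only the two examples given in this section (a real register would enumerate every change type the organisation recognises):

```python
# Change type -> AISDP modules that must be updated before the change is complete.
UPDATE_TRIGGERS = {
    "model_retrained": [3, 4, 5],  # architecture, data governance, testing, at minimum
    "pmm_finding": [6, 12],        # risk register and change history
}

def modules_to_update(change_type: str) -> list[int]:
    """Return the AISDP modules a given change type obliges the team to revise."""
    return UPDATE_TRIGGERS.get(change_type, [])

# Each applied update creates a new AISDP version with a documented change log;
# superseded versions are retained for the full ten-year period under Article 18.
```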
The organisations that comply most effectively design their engineering workflows to produce compliance evidence as a natural byproduct of development, embed security from the outset, build internal assessment rigour capable of withstanding regulatory scrutiny, and cultivate an oversight culture where every person involved has the literacy to recognise problems and the confidence to raise them. The delivery process provides the phased implementation workflow. Compliance costs quantifies the investment required across staffing, tooling, infrastructure, external advisory, and ongoing operational categories.