The thirteen domains of EU AI Act compliance are deeply interdependent. A deficiency in one propagates through others, often surfacing as a regulatory enforcement event far from the original gap. This reference provides the maturity model, readiness assessment, and resource planning framework that organisations need to sequence their compliance programme effectively.
The thirteen domains of AI Act compliance are deeply interdependent, and a deficiency in one propagates through others, often surfacing as a compliance failure far from the original gap. A weak data governance framework undermines fairness testing in the CI/CD pipeline, because model validation gates cannot catch bias that originates in unrepresentative training data. The post-market monitoring system then detects fairness drift only after deployment, triggering the serious incident reporting process.
This arrives at the competent authority as evidence of inadequate compliance. The root cause was a data governance gap; the consequence was a regulatory enforcement event. Organisations that treat each domain as a standalone checklist miss these propagation chains. Risk assessment under Article 9 sits at the apex: an underestimated risk produces proportionately weak controls everywhere downstream.
Six domain dependencies are most consequential. Risk assessment calibrates thresholds, controls, and monitoring parameters for all other domains. Data governance under Article 10 feeds model performance, fairness evaluation, and CI/CD gates. Version control enables the traceability that conformity assessment requires. The CI/CD pipeline generates the evidence pack. Post-market monitoring under Article 72 closes the feedback loop. Operational oversight multiplies every technical control through culture and escalation. Risk Management for High-Risk AI covers the risk assessment foundation.
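These dependencies can be made concrete. The sketch below is illustrative only: the domain names paraphrase the relationships described above, and the breadth-first walk shows everything a deficiency in one domain can propagate into.

```python
# Illustrative sketch only: the dependency structure described above,
# with domain names paraphrased from the text, and a breadth-first walk
# showing everything a deficiency in one domain can propagate into.
from collections import deque

DEPENDENCIES: dict[str, list[str]] = {
    "risk_assessment": ["data_governance", "version_control",
                        "cicd_pipeline", "post_market_monitoring",
                        "operational_oversight"],
    "data_governance": ["model_performance", "fairness_evaluation",
                        "cicd_gates"],
    "version_control": ["traceability", "conformity_assessment"],
    "cicd_pipeline": ["evidence_pack", "aisdp_currency"],
    "post_market_monitoring": ["risk_register", "feedback_loop"],
    "operational_oversight": ["escalation_culture", "incident_response"],
}

def downstream_impact(domain: str) -> set[str]:
    """Everything a deficiency in `domain` can reach, transitively."""
    seen: set[str] = set()
    queue = deque(DEPENDENCIES.get(domain, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(DEPENDENCIES.get(node, []))
    return seen

# Risk assessment sits at the apex: a weakness there reaches every domain.
print(sorted(downstream_impact("risk_assessment")))
```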
Organisations approaching AISDP preparation benefit from understanding where they stand on a progression from minimal awareness to embedded compliance. The maturity model defines five levels, each with characteristics and immediate priorities.
Level 1 (Awareness): the organisation is aware of the EU AI Act and has identified its AI systems, but has no AISDP, no formal risk assessment, and no governance roles. Priorities are system discovery and classification, governance role appointment, and risk assessment for the highest-risk systems.
Level 2 (Foundation): systems are classified, governance roles are assigned, and AISDP preparation has begun for the highest-risk systems. Basic version control and CI/CD exist for software but not for data or models. Priorities are data version control, a model registry integrated with CI/CD, model validation gates including fairness and robustness, and a data governance framework.
Level 3 (Structured): AISDPs are under preparation for all high-risk systems. Version control covers code, data, and models. The CI/CD pipeline includes fairness and robustness gates. A cybersecurity threat model exists. Priorities are the post-market monitoring system, serious incident reporting, the human oversight interface, and conformity assessment preparation.
Level 4 (Operational): conformity assessments are completed, Declarations of Conformity are signed, and high-risk systems are registered. The post-market monitoring system is producing data, operators are trained, and incident response is in place. Priorities are the feedback loop, oversight culture, inspection readiness, and extending the programme to medium-risk and new systems.
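The level descriptions translate directly into a small data structure that a programme tracker can query for current priorities. A minimal sketch, using only the characteristics quoted in this reference (Level 5, Embedded, is summarised at the end); the dataclass and field names are our own:

```python
# Maturity levels and priorities condensed from the text; the structure
# itself is an illustrative choice, not a prescribed format.
from dataclasses import dataclass

@dataclass(frozen=True)
class MaturityLevel:
    level: int
    name: str
    priorities: tuple[str, ...]

MATURITY_MODEL = (
    MaturityLevel(1, "Awareness",
                  ("system discovery and classification",
                   "governance role appointment",
                   "risk assessment for highest-risk systems")),
    MaturityLevel(2, "Foundation",
                  ("data version control",
                   "model registry integrated with CI/CD",
                   "validation gates for fairness and robustness",
                   "data governance framework")),
    MaturityLevel(3, "Structured",
                  ("post-market monitoring system",
                   "serious incident reporting",
                   "human oversight interface",
                   "conformity assessment preparation")),
    MaturityLevel(4, "Operational",
                  ("feedback loop", "oversight culture",
                   "inspection readiness",
                   "extension to medium-risk and new systems")),
    MaturityLevel(5, "Embedded",
                  ("threshold refinement from operational experience",
                   "harmonised standards adoption",
                   "adaptation to regulatory developments")),
)

def priorities_at(level: int) -> tuple[str, ...]:
    """Immediate priorities for an organisation at `level`."""
    return MATURITY_MODEL[level - 1].priorities

print(priorities_at(2))
```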
A structured readiness assessment identifies prerequisites that should be in place before systematic AISDP work begins. The assessment covers five dimensions, each corresponding to a category of capability that the end-to-end delivery process assumes.
Governance readiness requires all roles to be appointed: AI Governance Lead with sufficient authority, AI System Assessor(s), Legal and Regulatory Advisor, DPO Liaison, and Conformity Assessment Coordinator. Roles must be documented and communicated. Classification readiness requires a complete inventory of AI systems, a Classification Decision Record for each system, and risk tier determinations reviewed by the AI Governance Lead.
Infrastructure readiness covers version control for code, data, and model artefacts; a model registry integrated with the CI/CD pipeline; model validation gates for performance, fairness, and robustness; monitoring infrastructure for production data; and a document management system with version control and access controls.
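As a concrete illustration of such validation gates, the sketch below fails a pipeline stage unless performance, fairness, and robustness metrics clear their thresholds. The metric names and threshold values are hypothetical; in practice the thresholds are calibrated from the risk assessment.

```python
# Hypothetical model validation gate for a CI/CD stage. All metric names
# and thresholds are illustrative, not prescribed by the AI Act.
import sys

THRESHOLDS = {
    "accuracy": 0.90,                 # performance floor
    "demographic_parity_gap": 0.05,   # fairness ceiling (lower is better)
    "adversarial_accuracy": 0.75,     # robustness floor
}

def validation_gate(metrics: dict[str, float]) -> list[str]:
    """Return gate failures; an empty list means the model may be promoted."""
    failures = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        failures.append("performance below floor")
    if metrics["demographic_parity_gap"] > THRESHOLDS["demographic_parity_gap"]:
        failures.append("fairness gap above ceiling")
    if metrics["adversarial_accuracy"] < THRESHOLDS["adversarial_accuracy"]:
        failures.append("robustness below floor")
    return failures

if __name__ == "__main__":
    candidate = {"accuracy": 0.93, "demographic_parity_gap": 0.08,
                 "adversarial_accuracy": 0.81}
    problems = validation_gate(candidate)
    for p in problems:
        print("GATE FAILED:", p)
    sys.exit(1 if problems else 0)  # non-zero exit blocks the pipeline stage
```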
Process readiness encompasses the data governance framework, development methodology documentation, incident response plan, post-market monitoring plan, and end-of-life process. The end-of-life process must include decommissioning criteria, deployer notification templates, credential revocation procedures, data lifecycle closure, and documentation archival. Knowledge readiness requires AI Act training appropriate to each role. Each dimension that is not yet in place represents a workstream to initiate in parallel with AISDP preparation; a companion reference covers the infrastructure requirements for the assessment itself.
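One way to operationalise the assessment (a sketch under our own assumptions: the dimension names come from the text, the yes/no scoring scheme does not) is a checklist per dimension, where any dimension with an unmet prerequisite surfaces as an open workstream:

```python
# Readiness checklist: dimensions from the text, prerequisites condensed;
# the boolean values below are purely illustrative.
READINESS = {
    "governance": {"roles_appointed": True, "roles_documented": False},
    "classification": {"inventory_complete": True,
                       "decision_record_per_system": True},
    "infrastructure": {"data_version_control": False,
                       "model_registry_in_cicd": False},
    "process": {"incident_response_plan": True,
                "end_of_life_process": False},
    "knowledge": {"role_appropriate_training": False},
}

def open_workstreams(readiness: dict[str, dict[str, bool]]) -> list[str]:
    """Dimensions with at least one unmet prerequisite."""
    return [dim for dim, checks in readiness.items()
            if not all(checks.values())]

print(open_workstreams(READINESS))
# -> ['governance', 'infrastructure', 'process', 'knowledge']
```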
AISDP preparation is a significant investment that organisations must budget for across four resource categories. Understanding the cost structure helps prevent the common mistake of attempting compliance through last-minute documentation sprints, which costs more, produces lower quality, and carries greater enforcement risk.
Initial preparation for a medium-complexity high-risk system is estimated at 20 to 28 weeks from initiation to deployment. Effort breaks down as approximately half an FTE for the AI System Assessor, a third of an FTE for the Technical SME, and roughly a tenth of an FTE each for the Legal and Regulatory Advisor and the AI Governance Lead. Engineering effort varies from minor configuration changes for well-designed systems to substantial rearchitecture for systems not designed with compliance in mind.
Ongoing compliance requires approximately 15 to 25 per cent of the system's annual development cost, covering post-market monitoring operation, incident response, quarterly reviews, annual audits, AISDP maintenance, and regulatory interaction. Infrastructure costs (version control, model registry, CI/CD, monitoring, evidence repository, document management) are largely shared across the AI system portfolio. The initial investment is front-loaded; ongoing costs are primarily storage and compute for monitoring data.
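A quick back-of-the-envelope check of those figures (using midpoints of the quoted ranges; the annual development cost below is a placeholder, not a benchmark) shows the four roles summing to roughly one full FTE over the preparation window.

```python
# Arithmetic on the figures quoted above; only the annual development
# cost is invented, for illustration.
WEEKS = (20 + 28) / 2                    # midpoint of the 20-28 week range
FTE = {
    "ai_system_assessor": 0.50,          # "approximately half an FTE"
    "technical_sme": 1 / 3,              # "a third of an FTE"
    "legal_regulatory_advisor": 0.10,    # "roughly a tenth of an FTE"
    "ai_governance_lead": 0.10,
}
total_fte = sum(FTE.values())            # ~1.03 FTE
print(f"{total_fte:.2f} FTE x {WEEKS:.0f} weeks = "
      f"{total_fte * WEEKS:.0f} FTE-weeks of initial preparation")

annual_dev_cost = 400_000                # hypothetical figure
low, high = 0.15 * annual_dev_cost, 0.25 * annual_dev_cost
print(f"Ongoing compliance at 15-25%: {low:,.0f} to {high:,.0f} per year")
```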
External support includes annual penetration testing, notified body engagement where required, external legal advice, and translation services for multi-jurisdiction deployment. Companion references cover training programmes, AI literacy initiatives, and cultural change, which are portfolio-level investments, and map the specific tools and their licence models.
Six upstream-downstream relationships cause the most consequential compliance failures when the upstream domain is weak. Understanding these propagation chains is essential for prioritising remediation and resource allocation.
Risk assessment (covered in Risk Management for High-Risk AI) is upstream of all domains. Thresholds, controls, and monitoring parameters are calibrated to the risk register. An underestimated risk produces proportionately weak controls everywhere. Data governance is upstream of model performance, fairness evaluation, and CI/CD gates. Unrepresentative or poorly documented training data cannot be compensated for by post-training mitigation.
Version control is upstream of traceability and conformity assessment. Without version linkage between code, data, model, and configuration, the declaration of conformity cannot be substantiated. The CI/CD pipeline is upstream of the evidence pack and AISDP currency. Misconfigured gates or insufficient audit logging leave gaps that the internal assessment will flag.
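One common way to make that version linkage concrete is a release manifest that pins the exact code commit, data version, model artefact, and configuration behind each deployed version. The sketch below is our own format, not a prescribed one:

```python
# Hypothetical release manifest: one record per deployed version, so the
# declaration of conformity can be traced to exact artefacts.
import hashlib
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Content hash of an artefact file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(system_id: str, code_commit: str, data_version: str,
                   model_path: str, config_path: str) -> dict:
    """Pin every artefact behind one deployable version in one record."""
    return {
        "system_id": system_id,
        "released_at": datetime.now(timezone.utc).isoformat(),
        "code_commit": code_commit,      # e.g. output of `git rev-parse HEAD`
        "data_version": data_version,    # e.g. a data-versioning tool tag
        "model_sha256": sha256_of(model_path),
        "config_sha256": sha256_of(config_path),
    }
```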
Post-market monitoring is upstream of the risk register and all operational domains. If the feedback loop is not operationalised, findings accumulate without triggering corrective action. Operational oversight is upstream of all operational domains through the culture multiplier: technical controls are only effective if operators escalate, managers act, and executives are informed. Operational Oversight and Human Control covers the culture dimension.
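Operationalising that feedback loop means a monitoring finding does more than get logged. A minimal sketch, with all names and thresholds our own: a drift finding above its calibrated threshold both updates the risk register and opens a corrective action.

```python
# Illustrative feedback loop: a post-market monitoring finding that
# crosses its threshold updates the risk register and opens an action.
from dataclasses import dataclass, field

@dataclass
class RiskRegister:
    findings: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)

    def record(self, metric: str, value: float, threshold: float) -> None:
        if value > threshold:
            self.findings.append(
                f"{metric} drifted to {value:.3f} (threshold {threshold:.3f})")
            # Closing the loop: the finding triggers action, not just a log.
            self.corrective_actions.append(
                f"investigate and remediate {metric}")

register = RiskRegister()
register.record("fairness_gap", value=0.072, threshold=0.05)
print(register.corrective_actions)
```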
Start with the domains that propagate failures most widely. Risk assessment and data governance are prerequisites for every downstream compliance activity. Version control and CI/CD integration come next, enabling automated evidence generation. Post-market monitoring and operational oversight complete the operational layer.
The maturity model provides the sequencing framework. Level 1 to Level 2 is about establishing foundations: classify systems, appoint governance roles, begin AISDP preparation for the highest-risk systems, deploy data version control. Level 2 to Level 3 builds the structured compliance engineering pipeline: CI/CD gates, fairness testing, cybersecurity threat models, conformity assessment frameworks.
Level 3 to Level 4 operationalises compliance: complete conformity assessments, sign Declarations of Conformity, register systems, deploy post-market monitoring, train operators, establish incident response. Level 4 to Level 5 embeds compliance into the engineering and governance workflow so that evidence generation is automatic and the feedback loop operates continuously.
Each readiness dimension that scores below target represents a workstream with dependencies on other dimensions. Governance readiness enables all others. Infrastructure readiness enables process readiness. Knowledge readiness enables operational execution. The assessment should be repeated quarterly to track progress and identify emerging gaps. End-to-End Delivery Process provides the phase-by-phase delivery framework.
For a Level 1 organisation with a medium-complexity high-risk system, reaching Level 4 (Operational) takes approximately 20 to 28 weeks of structured work. The timeline varies based on existing infrastructure, team expertise, and system complexity. Starting from Level 2 shortens the path significantly.
The end-to-end delivery process assumes governance, classification, infrastructure, process, and knowledge prerequisites are satisfied. Organisations that begin without them encounter delays and rework. Each unmet prerequisite becomes a parallel workstream that competes for the same resources.
Much of the cost is shared: version control, model registry, CI/CD pipeline, monitoring infrastructure, evidence repository, and document management all serve the whole AI system portfolio. Per-system costs are primarily the assessor and SME effort during initial preparation and the ongoing monitoring compute and storage.
Technical controls are only effective if operators escalate concerns, managers act on escalations, and executives are informed. A culture that suppresses escalation undermines every technical safeguard. Operational oversight multiplies (or divides) the effectiveness of every other domain.
The readiness assessment covers five dimensions: governance readiness (roles appointed), classification readiness (system inventory complete), infrastructure readiness (version control and CI/CD operational), process readiness (frameworks documented), and knowledge readiness (role-appropriate training delivered).
Risk assessment is upstream of all domains. Data governance feeds model performance and fairness. Version control enables conformity traceability. CI/CD generates evidence. Post-market monitoring closes the feedback loop. Operational oversight acts as a culture multiplier.
Level 5 (Embedded): compliance is a natural byproduct of engineering and governance workflow. Evidence is generated automatically. The feedback loop operates continuously. Priorities are continuous improvement: refining thresholds from operational experience, incorporating harmonised standards, and adapting to regulatory developments.
Most organisations in early 2026 are between Level 1 and Level 2. The goal is to reach Level 4 before the August 2026 deadline for full high-risk framework application. End-to-End Delivery Process describes the timeline for moving through these levels.