The EU AI Act requires high-risk AI systems to satisfy obligations under Articles 8 to 15 before deployment. This seven-phase delivery process sequences those obligations into a practical 20-to-28-week programme with defined roles, artefacts, governance gates, and guidance for agile teams and brownfield systems.
The EU AI Act requires high-risk AI systems to satisfy obligations spanning risk management, data governance, technical documentation, human oversight, cybersecurity, and post-market monitoring. The seven-phase delivery process sequences these obligations into a practical implementation programme running 20 to 28 weeks for a medium-complexity system. Each phase has defined owners, artefacts, dependencies, and governance gates that must be cleared before the next phase begins.
The process is designed for a single high-risk AI system progressing from initial concept through to production operation and ongoing compliance. Organisations with multiple systems should run parallel instances of this workflow, coordinated through the AI Governance Lead. The phases overlap deliberately: risk assessment informs architecture, which informs development, and development begins before risk assessment is fully complete. This concurrency is necessary to meet the August 2026 compliance deadline without sacrificing rigour.
The workflow assumes that foundational infrastructure (version control, CI/CD, monitoring) is already in place before the system-specific programme commences. Organisations lacking this infrastructure should add 8 to 16 weeks for concurrent establishment, an investment that benefits all subsequent systems.
Phase 1 (Discovery and Classification, Weeks 1 to 3) determines whether the system falls within the AI Act's scope and classifies its risk tier. The AI System Assessor examines the system against the Article 3 definition of an AI system. If the system meets the definition, the Assessor classifies it against the risk tiers: prohibited under Article 5, high-risk under Articles 6 to 7 and Annex III, limited risk under Article 50, or minimal risk.
For systems falling within Annex III categories, the Assessor evaluates the Article 6 exception by testing the functional criterion and the risk criterion separately. The functional criterion covers whether the system performs narrow procedural tasks, improves results of previously completed human activities, or detects decision-making patterns without replacing human assessment. The risk criterion covers whether the system poses a significant risk of harm to health, safety, or fundamental rights. Both criteria must be satisfied for the exception to apply.
The Classification Reviewer independently reviews the Assessor's determination, with disagreements escalated to the AI Governance Lead. Phase 1 produces the Classification Decision Record (CDR), including system description, classification determination with rationale, the Article 6 assessment where applicable, and supporting evidence. The AI Governance Lead must approve the CDR before Phase 2 begins.
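The decision flow the Assessor follows can be sketched as a small function. This is an illustrative sketch only: the parameter names and returned tier labels are assumptions for clarity, while the legal tests are those cited above (Articles 3, 5, 6 to 7, Annex III, and 50).

```python
def classify(meets_article_3_definition: bool,
             prohibited_practice: bool,
             in_annex_iii: bool,
             functional_criterion_met: bool,
             risk_criterion_met: bool,
             transparency_obligation: bool) -> str:
    """Return the risk tier to record in the Classification Decision Record."""
    if not meets_article_3_definition:
        return "out-of-scope"                    # not an AI system under Article 3
    if prohibited_practice:
        return "prohibited"                      # Article 5
    if in_annex_iii:
        # Article 6 exception: BOTH the functional criterion (narrow procedural
        # task, etc.) and the risk criterion (no significant risk of harm to
        # health, safety, or fundamental rights) must be satisfied.
        if functional_criterion_met and risk_criterion_met:
            return "not-high-risk (Article 6 exception)"
        return "high-risk"                       # Articles 6 to 7, Annex III
    if transparency_obligation:
        return "limited-risk"                    # Article 50
    return "minimal-risk"
```

Note the ordering: the prohibited check precedes the Annex III check, and an Annex III system failing either exception criterion remains high-risk.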
Phase 2 (Risk Assessment and FRIA, Weeks 2 to 6) conducts the comprehensive risk assessment that informs all subsequent design and development decisions. The Technical SME and AI System Assessor carry out five-method risk identification: FMEA, stakeholder consultation, regulatory gap analysis, adversarial red-teaming, and horizon scanning. Each risk is scored across four dimensions covering health and safety, fundamental rights, operational integrity, and reputational exposure.
Residual risk acceptability is assessed against Article 9's "as far as possible" standard. For deployers of high-risk systems, the Fundamental Rights Impact Assessment required by Article 27 runs in parallel. The FRIA examines impact on all potentially affected EU Charter rights, with particular attention to intersectional effects.
The reputational risk framework assesses customer, market, regulatory, shareholder, and employee reputational dimensions for each technical risk. Phase 2 produces the risk register (populating AISDP Module 6), the FRIA report (populating AISDP Module 11), the reputational risk assessment, and the risk mitigation plan with assigned owners and timelines. The AI Governance Lead reviews the risk register and accepts the residual risk profile before development proceeds.
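A risk register entry scored across the four dimensions might look like the sketch below. The 1-to-5 severity scale, the worst-dimension aggregation, and the acceptance threshold are assumptions for illustration; the actual scoring methodology would be defined in the risk management plan.

```python
from dataclasses import dataclass

DIMENSIONS = ("health_safety", "fundamental_rights",
              "operational_integrity", "reputational_exposure")

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    scores: dict            # dimension -> severity 1 (negligible) .. 5 (severe)
    owner: str

    def overall(self) -> int:
        # Worst-dimension aggregation (an assumption, not a prescribed rule)
        return max(self.scores[d] for d in DIMENSIONS)

def residual_acceptable(entry: RiskEntry, threshold: int = 2) -> bool:
    """Crude check against the Article 9 'as far as possible' standard:
    residual severity must sit at or below the accepted threshold."""
    return entry.overall() <= threshold
```

A risk scored 4 on fundamental rights stays on the mitigation plan until remediation brings its residual score within the accepted profile.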
Phase 3 (Architecture and Design, Weeks 4 to 8) designs the system architecture informed by the risk assessment. The Statement of Business Intent is drafted and approved, documenting the ethical foundation and transparency commitment. Model selection uses compliance criteria: documentability, testability, auditability, bias detectability, maintainability, and determinism. The full spectrum of decisioning approaches is evaluated, from heuristic systems through statistical models, neural networks, and LLMs.
Model origin risk, copyright risk, and nation-alignment risk are assessed for each candidate approach. The layered architecture is designed with per-layer compensating controls against intent and outcome drift. The data governance framework is established, including dataset documentation requirements, data lineage infrastructure, fairness assessment methodology, and special category data handling procedures.
Version control strategy, CI/CD pipeline design, and infrastructure-as-code approach are defined during this phase. The cybersecurity threat model is developed using STRIDE/PASTA. Phase 3 produces the Statement of Business Intent, model selection rationale (AISDP Module 3), system architecture with dependency maps (AISDP Module 3), data governance plan (AISDP Module 4), version control and CI/CD design (AISDP Module 2), and cybersecurity threat model (AISDP Module 9).
An architecture review by the Technical SME, Legal and Regulatory Advisor, and AI Governance Lead provides formal sign-off that the design satisfies the risk mitigation plan.
Phase 4 (Development and Testing, Weeks 6 to 18) builds the system with compliance evidence generated as a natural byproduct of the engineering workflow. Development follows the approved architecture using version-controlled code, model, and data artefacts. The CI/CD pipeline enforces quality gates at every commit: static analysis including AI-specific rules, unit testing for every component type, contract testing between services, dependency and licence scanning, and secret detection.
Data engineering follows a pre-step/post-step capture methodology, with each transformation documented before execution and verified after execution. Dataset documentation is maintained continuously as datasets are assembled, cleaned, and transformed. Model training, validation, and testing follow documented methodology. Performance, fairness, robustness, and calibration metrics are computed and recorded.
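The pre-step/post-step capture methodology can be sketched as a wrapper that records intent and input state before a transformation runs, then records the verified output state afterwards. The record fields and digest scheme here are assumptions, not a prescribed format.

```python
import datetime
import hashlib
import json

def _digest(rows):
    """Stable short digest of a dataset snapshot, for lineage records."""
    return hashlib.sha256(json.dumps(rows, sort_keys=True).encode()).hexdigest()[:12]

def run_documented_step(name, intent, transform, rows, lineage_log):
    pre = {
        "step": name,
        "intent": intent,                        # documented BEFORE execution
        "input_rows": len(rows),
        "input_digest": _digest(rows),
        "declared_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    out = transform(rows)
    post = {                                     # verified AFTER execution
        "output_rows": len(out),
        "output_digest": _digest(out),
    }
    lineage_log.append({**pre, **post})
    return out

# Example: a cleaning step whose effect is captured in the lineage log.
log = []
cleaned = run_documented_step(
    "drop_missing_income", "remove records with no income value",
    lambda rs: [r for r in rs if r["income"] is not None],
    [{"income": 48000}, {"income": None}, {"income": 51000}],
    log,
)
```

Because every step emits a before/after record, the dataset documentation accumulates continuously rather than being reconstructed at the end of Phase 4.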
The model validation gate in the CI pipeline blocks promotion of any model failing to meet the AISDP-declared thresholds. The human oversight interface is developed with automation bias countermeasures, mandatory review workflows, and override capability. The explainability layer is implemented with fidelity validation. Cybersecurity testing is integrated throughout: SAST and DAST in the CI pipeline, dependency scanning, container image scanning, infrastructure-as-code scanning, and adversarial ML testing including adversarial examples, data poisoning simulations, and prompt injection testing where applicable.
Phase 4 produces version-controlled artefacts, automated test reports covering unit, integration, regression, fairness, and robustness testing, auto-generated model cards, data quality reports, training pipeline logs and metadata, cybersecurity scan results, and remediation records. Gates include automated model validation, manual security review for first deployment, and automated integration test suite pass.
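The model validation gate can be sketched as a comparison of computed metrics against AISDP-declared thresholds, failing closed on any missing metric. The metric names, directions, and bounds below are illustrative assumptions.

```python
THRESHOLDS = {
    "accuracy":               (">=", 0.92),
    "demographic_parity_gap": ("<=", 0.05),   # fairness
    "robustness_degradation": ("<=", 0.10),   # under input perturbation
    "calibration_error":      ("<=", 0.03),
}

def validation_gate(metrics: dict):
    """Return (passed, failures); a missing metric fails closed."""
    failures = []
    for name, (op, bound) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: metric not reported")
        elif (op == ">=" and value < bound) or (op == "<=" and value > bound):
            failures.append(f"{name}: {value} violates {op} {bound}")
    return (not failures, failures)
```

Wired into CI, a non-empty failure list blocks promotion and the list itself becomes part of the evidence pack.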
Phase 5 (Pre-Deployment Validation, Weeks 16 to 20) validates the complete system in a production-representative environment and compiles the AISDP. The system is deployed to a staging environment where end-to-end inference tests, regression tests, and chaos/fault injection tests run against production-representative data. Performance, fairness, and robustness metrics are computed on the staging environment and compared against declared AISDP thresholds.
The AISDP is compiled from artefacts produced during development. Each module is populated from corresponding engineering artefacts, not written from scratch. The Conformity Assessment Coordinator reviews each module for completeness and consistency. The internal conformity assessment then proceeds in three stages: a QMS assessment verifying all Article 17 elements are operational, a technical documentation assessment examining the AISDP against Articles 8 to 15, and a consistency assessment tracing from the AISDP to source artefacts.
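The third-stage consistency assessment amounts to tracing every claim back to evidence. A minimal sketch, assuming each module records the artefact IDs it cites:

```python
def consistency_check(aisdp_citations: dict, artefact_store: set) -> list:
    """Return (module, artefact) pairs whose cited evidence is missing."""
    missing = []
    for module, cited in sorted(aisdp_citations.items()):
        for artefact in cited:
            if artefact not in artefact_store:
                missing.append((module, artefact))
    return missing
```

An empty result means every module claim traces to a real source artefact; anything else is recorded as a non-conformity.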
Non-conformities are recorded and remediated. The operational oversight framework is established: monitoring infrastructure configured, alerting thresholds set, escalation procedures documented, break-glass procedures tested, and operator training completed.
Phase 5 produces the complete AISDP (all twelve modules), the internal conformity assessment report, the Non-Conformity Register, the assessment evidence register, operational oversight readiness confirmation, and operator training records. The AI Governance Lead reviews the assessment report and non-conformity register. Critical non-conformities must be resolved; major non-conformities require resolution or an approved remediation plan with a timeline that does not extend beyond deployment. The AI Governance Lead then signs the Declaration of Conformity.
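The sign-off rule reduces to a simple gate: critical items block outright, major items need resolution or an approved plan due no later than deployment. Treating minor non-conformities as non-blocking is an assumption in this sketch.

```python
import datetime

def deployment_clear(non_conformities, deployment_date):
    """Return (ok, blockers) per the Phase 5 sign-off rules (sketch)."""
    blockers = []
    for nc in non_conformities:
        if nc["resolved"]:
            continue
        if nc["severity"] == "critical":
            blockers.append(f"{nc['id']}: critical must be resolved")
        elif nc["severity"] == "major":
            plan = nc.get("plan")
            if not plan or not plan["approved"] or plan["due"] > deployment_date:
                blockers.append(f"{nc['id']}: major needs an approved plan due by deployment")
    return (not blockers, blockers)
```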
Phase 6 (Registration and Deployment, Weeks 20 to 22) registers the system in the EU database, affixes the CE marking, and deploys to production. The Conformity Assessment Coordinator registers the provider and system, submitting the Annex VIII information. For multi-jurisdiction deployments, registration information must reflect all deployment member states. Sensitive domain systems covering law enforcement, migration, and border control are submitted to the secure, non-public section.
The CE marking is affixed to the user interface and accompanying documentation. Deployment follows the CI/CD pipeline's compliance controls: staging validation, canary or shadow deployment, human approval gate, and deployment logging. The AI Governance Lead authorises initial production deployment after reviewing validation results. The deployment event is recorded in the immutable deployment ledger.
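One way to make the deployment ledger tamper-evident is hash chaining: each entry commits to the hash of its predecessor, so any retroactive edit breaks verification. The text does not specify the ledger's implementation; this is a sketch of the idea only.

```python
import hashlib
import json

class DeploymentLedger:
    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps({**event, "prev": prev}, sort_keys=True)
        h = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = json.dumps({**e["event"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Production systems would typically back this with an append-only store or a signed audit log; the chaining principle is the same.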
Deployers receive instructions for use as required by Article 13. These include the system's intended purpose, capabilities and limitations, performance characteristics, human oversight requirements, and maintenance obligations. Phase 6 produces EU database registration confirmation, CE marking evidence, deployment ledger entry, deployer communication records, and the signed Declaration of Conformity filed with the AISDP.
Phase 7 (Operational Monitoring and Continuous Compliance, Ongoing) maintains the system's compliance posture throughout its operational lifetime. The post-market monitoring system operates continuously, collecting performance, fairness, data drift, operational, and human oversight metrics. Alerts are generated and triaged according to a defined severity framework.
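A severity framework of this kind can be sketched as banding how far a monitored metric drifts past its declared threshold. The band boundaries and the response mapped to each band are assumptions for illustration.

```python
def triage(value: float, threshold: float, higher_is_worse: bool = True) -> str:
    """Map a metric reading to an alert severity band (illustrative)."""
    deviation = (value - threshold) if higher_is_worse else (threshold - value)
    ratio = deviation / abs(threshold) if threshold else deviation
    if ratio <= 0:
        return "ok"        # within declared bounds, no alert
    if ratio < 0.10:
        return "low"       # log; review at the quarterly PMM meeting
    if ratio < 0.25:
        return "medium"    # investigate within the monitoring cycle
    return "high"          # escalate to the AI Governance Lead immediately
```

For example, a demographic parity gap of 0.07 against a declared threshold of 0.05 is a 40 per cent exceedance and lands in the top band.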
The AI Governance Lead convenes quarterly PMM review meetings examining monitoring trends, operator escalation patterns, deployer feedback, complaint volumes, and the non-conformity register. The Internal Audit Assurance Lead conducts an annual oversight audit testing monitoring infrastructure, escalation pathways, break-glass procedures, training currency, and non-retaliation commitments.
Serious incidents are detected, triaged, reported, investigated, and remediated under the Article 73 process. Evidence is preserved and systems are not altered prior to authority notification. System changes are managed through the version control and CI/CD framework, with each change assessed against substantial modification thresholds. Changes crossing the threshold trigger a new conformity assessment cycle returning to Phase 5. Changes below the threshold are documented in the AISDP change history (Module 12).
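The change-assessment step can be expressed as a check against a declared set of substantial-modification criteria. The criteria listed here are illustrative, not the Act's definition; the organisation's own thresholds would be documented in the AISDP.

```python
SUBSTANTIAL_CRITERIA = {
    "intended_purpose_changed",
    "model_architecture_replaced",
    "new_training_data_source",
    "performance_outside_declared_range",
    "oversight_mechanism_changed",
}

def assess_change(change_flags: set) -> str:
    """Route a change: back to Phase 5, or into the Module 12 change history."""
    if change_flags & SUBSTANTIAL_CRITERIA:
        return "substantial"        # new conformity assessment, return to Phase 5
    return "non-substantial"        # document in AISDP Module 12
```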
Regulatory developments including new AI Office guidance, competent authority enforcement actions, harmonised standards publications, and Annex amendments are monitored by the Legal and Regulatory Advisor. The AISDP is maintained as a living document, with each material change creating a new version. The version history demonstrates the organisation's continuous compliance discipline.
The seven-phase workflow presents a linear sequence, but most AI development teams use agile methodologies. The compliance framework must integrate with agile practices without imposing a waterfall overlay that the team will circumvent.
Sprint-level compliance activities. The Technical Owner embeds compliance tasks in the sprint cadence rather than bolting them on as separate workstreams. Each sprint includes updating relevant AISDP modules for design decisions made during the sprint, running the full test suite including fairness and robustness gates as part of the definition of done, reviewing new risks and adding them to the risk register, and updating the evidence pack with sprint artefacts. Sprint retrospectives include a compliance dimension: what evidence was generated, what gaps remain, and what risks were introduced.
Incremental AISDP assembly. The AI System Assessor assembles the AISDP incrementally from Phase 1 onwards. Module 1 (System Identity) is completed during Phase 1, Module 6 (Risk Management) is drafted during Phase 2 and updated continuously, and Module 3 (Architecture) is populated during Phase 3 and refined as the architecture evolves. By Phase 5, the AISDP should be substantially complete, requiring only final review and consistency checking rather than wholesale authoring.
Feature flags and compliance boundaries. A feature flag enabling a new model version, data source, or decision pathway constitutes a system change that must be assessed against substantial modification thresholds. Feature flag configuration should be version-controlled and activation logged in the deployment ledger.
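Governed flag activation can be sketched as follows: flag state is plain data (so it can be version-controlled and diffed in review), activation is refused while a substantial-modification assessment is outstanding, and every activation is logged. All names here are illustrative.

```python
def activate_flag(flags: dict, name: str, assessment: str, ledger: list) -> dict:
    """Return a new flag state; refuse activation for substantial changes."""
    if assessment == "substantial":
        raise RuntimeError(f"{name}: substantial modification; "
                           "complete a new conformity assessment first")
    new_state = {**flags, name: True}   # immutable update, easy to diff in review
    ledger.append({"flag": name, "action": "activated", "assessment": assessment})
    return new_state
```

Returning a new state rather than mutating in place keeps each configuration version reviewable as a discrete artefact.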
Many organisations must bring existing AI systems into compliance. These brownfield systems may have been developed without the documentation practices, testing frameworks, or governance structures required by the AI Act. The challenge is achieving compliance without disrupting service to deployers and affected persons.
Gap assessment. The AI System Assessor examines each AISDP module and identifies what documentation exists, what is missing, what testing has been performed, what testing is needed, what governance controls are in place, and what controls are absent. The gap assessment produces a remediation plan with priorities, owners, and timelines.
Documentation reconstruction. Some documentation will need to be reconstructed from available artefacts. Training data that was not version-controlled may need statistical characterisation through analysis of the deployed model's behaviour. Architecture details not formally documented may need extraction from the codebase. Design decisions never recorded may need recovery through team interviews. The AISDP should clearly indicate where documentation has been reconstructed rather than generated contemporaneously. Transparency about reconstruction is more credible to a competent authority than retroactive documentation claiming to be original.
Retrofitting version control and testing. The current state of all artefacts is captured as a baseline version. The AISDP must document the date from which formal version control was established and acknowledge incomplete history prior to that date. Comprehensive retrospective testing covers fairness across protected characteristic subgroups, robustness, performance benchmarking, and security. Results become the baseline for evaluating future changes.
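The subgroup fairness portion of retrospective testing can be sketched as computing a per-subgroup positive-outcome rate and the largest gap between any two subgroups. The record field names and the choice of parity gap as the metric are assumptions.

```python
def subgroup_rates(records, group_key, outcome_key):
    """Positive-outcome rate per protected-characteristic subgroup."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates: dict) -> float:
    """Largest absolute difference in rates between any two subgroups."""
    vals = list(rates.values())
    return max(vals) - min(vals)
```

The computed rates and gap become the baseline against which future model changes are evaluated.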
Organisations with multiple high-risk AI systems must coordinate parallel AISDP preparation tracks to avoid resource bottlenecks. Without coordination, parallel tracks compete for the same resources: the AI Governance Lead's time, the Legal and Regulatory Advisor's attention, and engineering team capacity.
Portfolio prioritisation. Systems are prioritised on four axes. Risk tier comes first, with highest-risk systems taking priority. Deployment timeline follows, addressing systems approaching deadlines before those in early development. Deployment scale matters because systems affecting more people carry greater enforcement risk. Compliance readiness factors in, as systems with less existing documentation require more effort and should start earlier.
Shared resource planning. The AI Governance Lead, Legal and Regulatory Advisor, Conformity Assessment Coordinator, and Internal Audit Assurance Lead are typically shared across the portfolio. Governance gates including CDR approval, risk register acceptance, and Declaration of Conformity signing are staggered to avoid queuing. If multiple systems reach Phase 5 simultaneously, the assessment workload may exceed available capacity.
Cross-system synergies. Systems sharing common components such as the same GPAI model, data sources, or deployment infrastructure can share compliance artefacts. A GPAI model risk assessment conducted for one system can be reused with system-specific adaptation for another. Data governance documentation for a shared data source is written once and referenced by multiple AISDPs. Identifying these synergies early allows the compliance work to produce reusable artefacts.
The baseline timeline of 20 to 28 weeks applies to a medium-complexity high-risk system with cooperative stakeholders. Several factors adjust estimates upward or downward.
Factors increasing effort. A third-party GPAI model with limited provider disclosures adds 3 to 6 weeks for compensating due diligence and testing. Brownfield deployment with limited existing documentation adds 4 to 10 weeks depending on gaps. Systems in biometric identification domains requiring third-party conformity assessment add 6 to 12 weeks for notified body engagement. Multi-member-state deployment adds 2 to 4 weeks for multi-jurisdiction registration and translation. Building compliance infrastructure from scratch (version control, CI/CD, monitoring) concurrently adds 8 to 16 weeks, though this investment benefits all subsequent systems.
Factors decreasing effort. Well-documented, intrinsically explainable model architectures reduce Phase 3 by 1 to 2 weeks. Reusing shared artefacts from a comparable AISDP reduces total effort by 20 to 30 per cent. Low-complexity applications with few input features, a single model component, and a single deployer reduce Phase 4 by 3 to 6 weeks.
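The week-range adjustments above compose into a back-of-envelope schedule estimator. The factor keys are illustrative labels for those same adjustments; percentage effects such as artefact reuse are left out of this sketch.

```python
BASELINE_WEEKS = (20, 28)

# (low, high) week deltas taken from the adjustment ranges in the text
ADJUSTMENTS = {
    "opaque_gpai_provider":        (3, 6),
    "brownfield_limited_docs":     (4, 10),
    "notified_body_required":      (6, 12),
    "multi_member_state":          (2, 4),
    "infrastructure_from_scratch": (8, 16),
    "explainable_architecture":    (-2, -1),
    "low_complexity":              (-6, -3),
}

def estimate_weeks(factors: set) -> tuple:
    low, high = BASELINE_WEEKS
    for f in factors:
        dl, dh = ADJUSTMENTS[f]
        low, high = low + dl, high + dh
    return (low, high)
```

For instance, a brownfield system with limited documentation lands at roughly 24 to 38 weeks before any offsetting reductions.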
Cost estimation. The fully loaded cost for preparing an AISDP for a medium-complexity high-risk system ranges from EUR 150,000 to EUR 400,000 for initial preparation, with annual ongoing compliance costs of EUR 50,000 to EUR 150,000. These figures vary by jurisdiction, organisation size, and system complexity. The AI Governance Lead validates them against specific circumstances during Phase 1.
The phases overlap deliberately. Risk assessment (Phase 2) begins in Week 2 while classification (Phase 1) is completing. Architecture (Phase 3) starts in Week 4, and development (Phase 4) begins in Week 6, before risk assessment finishes. This concurrency is necessary to meet the 20-to-28-week timeline.
A feature flag enabling a new model version, data source, or decision pathway is a system change that must be assessed against substantial modification thresholds. The flag configuration should be version-controlled and its activation logged in the deployment ledger. Changes crossing the threshold trigger a new conformity assessment cycle.
Systems sharing common components such as the same GPAI model, data sources, or deployment infrastructure can share compliance artefacts. A GPAI model risk assessment conducted for one system can be reused with system-specific adaptation for another, reducing total effort by 20 to 30 per cent for comparable systems.
Add 8 to 16 weeks to the baseline timeline for establishing version control, CI/CD pipelines, and monitoring infrastructure. This investment benefits all subsequent AI systems and should be treated as foundational work rather than system-specific compliance effort.
Typically 20 to 28 weeks for a medium-complexity high-risk system with cooperative stakeholders. Brownfield systems add 4 to 10 weeks; systems needing notified body engagement add 6 to 12 weeks; building compliance infrastructure from scratch adds 8 to 16 weeks.
The AI System Assessor tests against Article 3(1), then classifies as prohibited (Article 5), high-risk (Articles 6-7, Annex III), limited risk (Article 50), or minimal risk. Annex III systems may qualify for the Article 6(3) exception if both functional and risk criteria are met.
Embed AISDP module updates, fairness and robustness testing, risk register updates, and evidence pack maintenance in each sprint's definition of done. Assemble the AISDP incrementally from Phase 1 so Phase 5 requires review, not wholesale authoring.
Conduct a gap assessment against AISDP requirements, reconstruct documentation from available artefacts with transparency about reconstruction, retrofit version control and testing from a captured baseline, and follow a phased plan (critical gaps, documentation, infrastructure) with milestones before August 2026.
EUR 150,000 to EUR 400,000 for initial AISDP preparation for a medium-complexity high-risk system, with EUR 50,000 to EUR 150,000 annual ongoing costs. Figures vary by jurisdiction, organisation size, and complexity.
Each AISDP module is populated from engineering artefacts produced during development, not written from scratch. Sources include the Classification Decision Record, risk register, architecture documents, test reports, model cards, data quality reports, and cybersecurity scan results.
Phase 7 produces monthly PMM reports, quarterly review meeting minutes, annual oversight audit reports, serious incident reports within required timelines, AISDP version updates, updated risk register entries, and regulatory horizon scanning summaries.
Continuous conformity assessment. The CI/CD pipeline provides automated continuous checking through quality gates. Manual assessment activities are conducted by the Conformity Assessment Coordinator at defined milestones rather than concentrated at the end. This reduces the risk of discovering fundamental non-conformities late when remediation is costly.
Phased compliance. A three-phase approach may be appropriate: Phase A addresses critical gaps including human oversight controls, serious incident reporting capability, and basic PMM; Phase B addresses documentation gaps assembling the AISDP from existing and reconstructed artefacts; and Phase C addresses infrastructure gaps establishing version control, extending the CI/CD pipeline, and building monitoring infrastructure. The phased plan must be documented and approved by the AI Governance Lead with milestones demonstrating progress before the August 2026 deadline.
Portfolio governance cadence. The AI Governance Lead establishes a monthly portfolio status review tracking progress against phase milestones, a quarterly resource review assessing allocation sufficiency, and an annual strategic review assessing overall compliance posture. This cadence sits above the individual system governance cadences.
End-of-life considerations. The delivery process extends beyond deployment into operational monitoring and eventually end-of-life. Organisations should treat end-of-life as the final phase, governed by the same discipline of planning, execution, documentation, and review. The end-of-life plan is the equivalent of the deployment plan; the decommissioning record is the equivalent of the deployment evidence pack.