The AISDP delivery process follows seven phases spanning 20 to 28 weeks: discovery and classification, risk assessment, architecture and design, development and testing, pre-deployment validation, registration and deployment, and ongoing operational monitoring. Each phase has defined activities, artefacts, and governance gates. Compliance evidence is generated as a natural byproduct of the engineering workflow, not bolted on at the end.
Phase 1 spans Weeks 1 to 3 and determines whether the system falls within the AI Act's scope, classifies its risk tier, and produces the Classification Decision Record. The AI System Assessor examines the system against the Article 3(1) definition of an AI system. If the system meets the definition, the Assessor classifies it against the risk tiers: prohibited under Article 5, high-risk under Articles 6 to 7 and Annex III, limited risk under Article 50, or minimal risk.
For systems falling within Annex III categories, the Assessor evaluates the Article 6(3) exception by testing the functional criterion and the risk criterion separately. The functional criterion covers whether the system performs narrow procedural tasks, improves the results of previously completed human activities, or detects decision-making patterns without replacing human assessment. The risk criterion covers whether the system poses a significant risk of harm to health, safety, or fundamental rights. Both criteria must be satisfied for the exception to apply.
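The two-criteria structure of the Article 6(3) test lends itself to an explicit decision record. The sketch below is illustrative only: the field names are not prescribed by the Act, and the real assessment requires documented legal reasoning behind each boolean, not a checkbox exercise.

```python
from dataclasses import dataclass

@dataclass
class Article6_3Assessment:
    """Illustrative record of the Article 6(3) exception test (field
    names are hypothetical, not taken from the Act)."""
    performs_narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_human: bool
    poses_significant_risk_of_harm: bool  # to health, safety, or fundamental rights

    def functional_criterion_met(self) -> bool:
        # Any one of the three functional conditions suffices.
        return (self.performs_narrow_procedural_task
                or self.improves_completed_human_activity
                or self.detects_patterns_without_replacing_human)

    def exception_applies(self) -> bool:
        # Both criteria must be satisfied: a qualifying function
        # AND no significant risk of harm.
        return self.functional_criterion_met() and not self.poses_significant_risk_of_harm
```

Testing each criterion separately, as the Assessor does, makes the failure mode visible: a system can satisfy the functional criterion yet still be high-risk because the risk criterion fails.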
The Classification Reviewer independently reviews the Assessor's determination, providing a separate assessment that either confirms or challenges the classification. Disagreements between the Assessor and the Reviewer are escalated to the AI Governance Lead for resolution.
Phase 1 produces three artefacts: the Classification Decision Record including the system's description, the classification determination with rationale, the Article 6(3) assessment where applicable, and the evidence supporting the classification; the initial risk profile identifying the regulatory obligations triggered by the classification; and the evidence pack containing the source materials that informed the classification decision. The phase gate requires the AI Governance Lead to approve the CDR before Phase 2 begins, ensuring that the classification has governance-level endorsement before resources are committed to the system's development.
Phase 2 spans Weeks 2 to 6 and conducts the comprehensive risk assessment that informs all subsequent design and development decisions. The Technical SME and AI System Assessor conduct the five-method risk identification covering FMEA, stakeholder consultation, regulatory gap analysis, adversarial red-teaming, and horizon scanning. The risk register is established, with each risk scored across four dimensions: health and safety, fundamental rights, operational integrity, and reputational exposure. Residual risk acceptability is assessed against Article 9(4)'s standard of reduction as far as possible.
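A risk register entry scored across the four dimensions can be sketched as below. The 1-to-5 scale and the worst-dimension aggregation rule are assumptions for illustration, not AISDP-mandated conventions.

```python
from dataclasses import dataclass

# The four scoring dimensions named in the AISDP risk assessment.
DIMENSIONS = ("health_safety", "fundamental_rights",
              "operational_integrity", "reputational_exposure")

@dataclass
class RiskEntry:
    risk_id: str
    scores: dict          # dimension -> 1 (negligible) .. 5 (severe); scale is illustrative
    mitigation: str = ""

    def overall(self) -> int:
        # Conservative aggregation: the worst dimension drives the
        # overall rating (an assumption, not a mandated rule).
        return max(self.scores[d] for d in DIMENSIONS)

def register_risk(register: list, risk: RiskEntry) -> None:
    """Reject entries that are not scored on every dimension before
    admitting them to the register."""
    missing = [d for d in DIMENSIONS if d not in risk.scores]
    if missing:
        raise ValueError(f"risk {risk.risk_id} missing dimensions: {missing}")
    register.append(risk)
```

Enforcing completeness at registration time keeps the register usable as conformity evidence: no risk enters it partially scored.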
For deployers of high-risk systems, the Fundamental Rights Impact Assessment (FRIA) required by Article 27 is conducted in parallel. The FRIA examines the impact on all potentially affected EU Charter rights, with particular attention to intersectional effects. The reputational risk framework assesses customer, market, regulatory, shareholder, and employee dimensions for each technical risk.
Phase 2 produces four key artefacts: the risk register populating AISDP Module 6 with each risk scored and its mitigation documented; the FRIA report populating Module 11 with the fundamental rights assessment for deployer contexts; the reputational risk assessment covering five dimensions of reputational exposure; and the risk mitigation plan with assigned owners and timelines for each mitigation action. The phase gate requires the AI Governance Lead to review the risk register, assess whether the residual risk profile after mitigation falls within the organisation's risk appetite, and accept the profile before development resources are committed.
Phase 3 spans Weeks 4 to 8 and designs the system architecture informed by the risk assessment, selects the model approach, and establishes the data governance framework. The Statement of Business Intent is drafted and approved. The ethical foundation and transparency commitment are documented.
Model selection is conducted using the compliance criteria: documentability, testability, auditability, bias detectability, maintainability, and determinism. The full spectrum of decisioning approaches is evaluated covering heuristic systems, statistical models, neural networks, and LLMs. Model origin risk, copyright risk, and nation-alignment risk are assessed.
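One way to make the six-criteria evaluation comparable across candidate approaches is a simple weighted scoring matrix. The sketch below assumes an equal-weight default and a 1-to-5 rating scale; both are illustrative choices, not part of the AISDP.

```python
# The six compliance criteria used for model selection.
CRITERIA = ("documentability", "testability", "auditability",
            "bias_detectability", "maintainability", "determinism")

def score_candidate(name: str, ratings: dict, weights: dict = None) -> dict:
    """Weighted average score for one decisioning approach.
    Ratings run 1 (poor) to 5 (strong) per criterion; weights default
    to equal (an assumption, not a mandated weighting)."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(ratings[c] * weights[c] for c in CRITERIA)
    return {"candidate": name, "score": total / sum(weights.values())}
```

Scoring the full spectrum this way often surfaces the trade-off that motivates the criteria: an LLM may rate highly on capability measures not listed here while scoring poorly on determinism and auditability, which is precisely what the compliance-oriented criteria are designed to expose.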
The layered architecture is designed with per-layer compensating controls against intent and outcome drift. The data governance framework is established including dataset documentation requirements, data lineage infrastructure, fairness assessment methodology, and special category data handling procedures. Version control strategy, CI/CD pipeline design, and infrastructure-as-code approach are defined. The cybersecurity threat model is developed using STRIDE/PASTA.
Phase 3 produces the Statement of Business Intent approved by the AI Governance Lead and Business Owner, the model selection rationale document populating AISDP Module 3, the system architecture document with dependency maps also populating Module 3, the data governance plan populating Module 4, the version control and CI/CD design populating Module 2, and the cybersecurity threat model populating Module 9.
Phase 4 spans Weeks 6 to 18 and is the longest phase, building the system in accordance with the approved architecture with compliance evidence generated as a natural byproduct of the engineering workflow. Development follows the approved architecture using version-controlled code, model, and data artefacts. The CI/CD pipeline enforces quality gates at every commit: static analysis including AI-specific rules, unit testing, contract testing between services, dependency and licence scanning, and secret detection.
Data engineering follows the pre-step/post-step capture methodology with each transformation documented before execution and verified after execution. Dataset documentation is maintained continuously. Model training, validation, and testing follow the documented methodology. Performance, fairness, robustness, and calibration metrics are computed and recorded. The model validation gate blocks promotion of any model failing to meet AISDP-declared thresholds.
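The pre-step/post-step capture methodology can be sketched as a wrapper that fingerprints the dataset before and after each transformation and appends both records to a lineage log. The fingerprint scheme (row count plus content hash) is an assumption for illustration; production lineage infrastructure would record richer metadata.

```python
import hashlib
import json

def dataset_fingerprint(rows):
    """Stable fingerprint of a dataset: row count plus a content hash
    (an illustrative scheme, not the AISDP's prescribed format)."""
    payload = json.dumps(rows, sort_keys=True).encode()
    return {"rows": len(rows), "sha256": hashlib.sha256(payload).hexdigest()}

def captured_transform(rows, transform, step_name, log):
    """Pre-step/post-step capture: record dataset state before
    execution, run the documented transformation, then record and
    return the verified result."""
    pre = dataset_fingerprint(rows)      # pre-step capture
    result = transform(rows)             # execute the documented step
    post = dataset_fingerprint(result)   # post-step capture
    log.append({"step": step_name, "pre": pre, "post": post})
    return result
```

Because every step emits a before/after pair, the lineage log doubles as reproducibility evidence: an auditor can verify that re-running the documented steps yields the same fingerprints.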
The human oversight interface is developed with automation bias countermeasures, mandatory review workflows, and override capability. The explainability layer is implemented with fidelity validation. Cybersecurity testing is integrated throughout: SAST and DAST in the CI pipeline, dependency scanning, container image scanning, and infrastructure-as-code scanning. Adversarial ML testing covers adversarial examples, data poisoning simulations, and prompt injection testing where applicable.
Phase 4 continuously produces version-controlled code, model artefacts, and dataset versions alongside automated test reports covering unit, integration, regression, fairness, and robustness testing. Model cards are auto-generated from the CI pipeline's evaluation results. Data quality reports are produced at each data transformation stage. Training pipeline logs and metadata provide the reproducibility evidence for AISDP Module 2. Cybersecurity scan results and remediation records are retained for Module 9. The feature registry with proxy variable assessments documents each feature's correlation with protected characteristics.
Phase 5 spans Weeks 16 to 20 and validates the complete system in a production-representative environment while compiling the AISDP. The system is deployed to staging. End-to-end inference tests, regression tests, and chaos/fault injection tests are executed against production-representative data. Performance, fairness, and robustness metrics are compared against declared thresholds.
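The threshold comparison can be implemented as a simple gate that reports every breach rather than stopping at the first. The metric names, limits, and directions below are illustrative values, not AISDP-declared figures.

```python
# Declared thresholds: (limit, direction). Values are illustrative.
THRESHOLDS = {
    "accuracy":               (0.92, ">="),
    "demographic_parity_gap": (0.05, "<="),
    "robustness_score":       (0.85, ">="),
}

def validation_gate(measured: dict) -> list:
    """Return the list of threshold breaches; an empty list means the
    gate passes. Unreported metrics count as breaches, so a metric
    cannot pass by omission."""
    breaches = []
    for metric, (limit, direction) in THRESHOLDS.items():
        value = measured.get(metric)
        if value is None:
            breaches.append(f"{metric}: not reported")
        elif direction == ">=" and value < limit:
            breaches.append(f"{metric}: {value} < {limit}")
        elif direction == "<=" and value > limit:
            breaches.append(f"{metric}: {value} > {limit}")
    return breaches
```

Treating a missing metric as a failure is the key design choice: the gate enforces that the staging run actually produced every declared measurement.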
The AISDP is compiled from the artefacts produced during development, with each module populated from the corresponding engineering artefacts rather than written from scratch. The Conformity Assessment Coordinator reviews each module for completeness and consistency against the Annex IV requirements.
The internal conformity assessment under Annex VI is conducted in three workstreams. First, a QMS assessment verifies that all Article 17 elements are operational, documented, and evidenced. Second, a technical documentation assessment examines each AISDP module against Articles 8 through 15 for content completeness, evidence sufficiency, consistency, and currency. Third, a consistency assessment traces from the AISDP to the source artefacts, verifying that the documentation accurately describes the system as built. Non-conformities are recorded in the Non-Conformity Register and remediated.
The operational oversight framework from the oversight pyramid is established: monitoring infrastructure is configured, alerting thresholds are set, escalation procedures are documented, break-glass procedures are tested, and operator training is completed. Phase 5 produces the complete AISDP, the internal conformity assessment report, the Non-Conformity Register with all items resolved or accepted with documented rationale, the assessment evidence register, operational oversight readiness confirmation, and operator training records. The AI Governance Lead reviews the assessment report and signs the Declaration of Conformity.
Phase 7 is ongoing and maintains the system's compliance posture throughout its operational lifetime through continuous monitoring, periodic review, and responsive action. The PMM system operates continuously, collecting performance, fairness, data drift, operational, and human oversight metrics. Alerts are generated and triaged according to the severity framework.
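Severity-based triage can be sketched as a banding function over each metric's relative deviation from its baseline. The bands below are assumed values for illustration; the AISDP's actual severity framework defines its own thresholds and response times.

```python
# Severity bands over relative deviation from baseline (illustrative).
SEVERITY_RULES = [
    ("critical", lambda dev: dev >= 0.20),
    ("high",     lambda dev: dev >= 0.10),
    ("medium",   lambda dev: dev >= 0.05),
    ("low",      lambda dev: dev > 0.0),
]

def triage(metric: str, baseline: float, observed: float):
    """Classify a monitored metric's deviation from baseline into a
    severity band; return None when the metric is within tolerance."""
    deviation = abs(observed - baseline) / baseline if baseline else 0.0
    for severity, matches in SEVERITY_RULES:
        if matches(deviation):
            return {"metric": metric, "deviation": round(deviation, 3),
                    "severity": severity}
    return None  # within tolerance: no alert raised
```

Because the rules are ordered from most to least severe, each deviation maps to exactly one band, which keeps alert volumes predictable and escalation paths unambiguous.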
The AI Governance Lead convenes quarterly PMM review meetings examining monitoring trends, operator escalation patterns, deployer feedback, complaint volumes, and the non-conformity register. The Internal Audit Assurance Lead conducts an annual oversight audit testing monitoring infrastructure, escalation pathways, break-glass procedures, training currency, and non-retaliation commitments.
Serious incidents are detected, triaged, reported, investigated, and remediated in accordance with the Article 73 process. Evidence is preserved and systems are not altered prior to authority notification. System changes are managed through the version control and CI/CD framework, with each change assessed against the substantial modification thresholds. Changes crossing the threshold trigger a new conformity assessment cycle returning to Phase 5. Changes below the threshold are documented in Module 12.
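The change-routing decision can be expressed as a small function over the change's classified types. The set of change types that count as substantial is illustrative here; the real test is defined by the organisation's substantial-modification thresholds.

```python
# Illustrative substantial-modification criteria; the actual list is
# defined in the organisation's change management policy.
SUBSTANTIAL_CHANGE_TYPES = {
    "intended_purpose_change",
    "model_architecture_change",
    "training_data_domain_change",
    "oversight_mechanism_removal",
}

def route_change(change_types: set) -> str:
    """Route a proposed change: substantial modifications re-enter the
    conformity assessment cycle at Phase 5; all other changes are
    documented in the AISDP change history (Module 12)."""
    if change_types & SUBSTANTIAL_CHANGE_TYPES:
        return "new_conformity_assessment_phase_5"
    return "document_in_module_12"
```

Running every change through one routing function, inside the CI/CD framework, means the substantial-modification assessment is itself logged and auditable rather than left to ad hoc judgment.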
Regulatory developments are monitored by the Legal and Regulatory Advisor: new guidance from the AI Office, enforcement actions taken by competent authorities against comparable systems, harmonised standards publications that may change the compliance baseline, and amendments to the Act's Annexes that could affect the system's classification or obligations. Each development is assessed for its impact on the system's compliance posture and, where relevant, triggers AISDP updates, reclassification reviews, or operational changes.
The Declaration of Conformity is signed at the Phase 5 gate, once the AI Governance Lead has reviewed the assessment report, all critical non-conformities have been resolved, and the conformity assessment record supports an unqualified finding.
The Phase 3 gate requires architecture review by the Technical SME, Legal and Regulatory Advisor, and AI Governance Lead, with formal sign-off that the design satisfies the risk mitigation plan established in Phase 2.
Three gates control Phase 4 progression: the model validation gate, which is automated and blocks any model failing declared thresholds; the security review gate, which is manual for the first deployment; and the integration test suite, which must pass with all contract tests green before the system can proceed to Phase 5.
Phase 6 spans Weeks 20 to 22 and registers the system in the EU database, affixes CE marking, and deploys to production. The Conformity Assessment Coordinator registers the provider and the system in the EU database, submitting the Annex VIII information. For multi-jurisdiction deployments, the coordinator ensures that registration information reflects all deployment member states. For sensitive domain systems covering law enforcement, migration, and border control, registration is submitted to the secure non-public section. The CE marking is affixed to the user interface and accompanying documentation.
Deployment follows the CI/CD pipeline's compliance controls: staging validation, canary or shadow deployment, human approval gate with the AI Governance Lead authorising the initial deployment, and immutable deployment logging. Deployers are provided with the Instructions for Use under Article 13, documenting the system's intended purpose, capabilities and limitations, performance characteristics, human oversight requirements, and maintenance obligations. Phase 6 produces the EU database registration confirmation, CE marking evidence, the deployment ledger entry, deployer communication records, and the signed Declaration of Conformity filed with the AISDP.
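The staged rollout with an immutable log can be sketched as a pipeline that runs each compliance control in order and appends every outcome to an append-only ledger. The stage names and checks are hypothetical placeholders; real gates invoke the staging test suite, canary metric comparison, and the AI Governance Lead's approval workflow.

```python
def deploy(release: dict, stages: list, ledger: list) -> bool:
    """Run compliance-controlled rollout stages in order; any failing
    stage halts the deployment. Every outcome, pass or fail, is
    appended to the ledger (immutable deployment logging sketch)."""
    for name, check in stages:
        passed = check(release)
        ledger.append({"release": release["version"], "stage": name,
                       "passed": passed})
        if not passed:
            return False
    return True

# Illustrative stage checks keyed on hypothetical release fields.
STAGES = [
    ("staging_validation", lambda r: r["staging_tests_passed"]),
    ("canary_deployment",  lambda r: r["canary_metrics_ok"]),
    ("human_approval",     lambda r: r["governance_lead_approved"]),
]
```

Logging failures as well as successes matters: the ledger then evidences not just what was deployed but what was blocked and why.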
The AISDP is maintained as a living document throughout Phase 7. Each material change to the system, its documentation, or its operational context creates a new AISDP version. The version history demonstrates the organisation's continuous compliance discipline to any competent authority that requests evidence of ongoing management.
Phase 7 continuously produces monthly PMM reports, quarterly review meeting minutes and action items, the annual oversight audit report, and serious incident reports as needed within the required timelines. AISDP version updates, updated risk register entries, and regulatory horizon scanning summaries complete the ongoing evidence trail.
The phase gates for Phase 7 include quarterly PMM review approval by the AI Governance Lead, annual audit sign-off by the Internal Audit Assurance Lead, and serious incident response completion confirmed jointly by the AI Governance Lead and Legal and Regulatory Advisor.
Total elapsed time from initiation to production deployment is typically 20 to 28 weeks for a medium-complexity high-risk system with cooperative stakeholders. The key owners progress through the phases: the AI System Assessor leads Phase 1, the Technical SME and Assessor lead Phase 2, the Technical Owner leads Phase 3, the engineering team leads Phase 4, the Conformity Assessment Coordinator leads Phases 5 and 6, and the AI Governance Lead leads Phase 7.
Phases overlap by design: risk assessment informs architecture which informs development, and development begins before risk assessment is fully complete. Phase 2 (Weeks 2 to 6) overlaps with Phase 3 (Weeks 4 to 8), and Phase 4 (Weeks 6 to 18) begins well before Phase 2 concludes. This overlapping structure reduces the total elapsed time while maintaining the gate discipline that ensures each phase's outputs are validated before dependent work progresses too far. The timeline assumes the organisation has established the foundational infrastructure covering version control, CI/CD, and monitoring before commencing the system-specific workflow. Organisations without this infrastructure should add 8 to 16 weeks for its establishment, though this investment benefits all subsequent systems in the portfolio.