The AISDP process integrates with agile development through sprint-level compliance activities and incremental document assembly. Feature flags must be managed within the compliance framework and assessed against substantial modification thresholds. Brownfield systems require gap assessment, documentation reconstruction, and phased compliance planning to meet the August 2026 deadline.
The seven-phase delivery workflow presents a linear sequence, but in practice most AI development teams use agile methodologies with iterative sprints, continuous delivery, and evolving requirements. The compliance framework must integrate with agile practices without imposing a waterfall overlay that the team will circumvent.
The Technical Owner embeds compliance activities in the sprint cadence rather than bolting them on as separate workstreams. Each sprint should include compliance-relevant tasks: updating the relevant AISDP modules for any design decisions made during the sprint, running the full test suite, including the fairness and robustness gates, as part of the sprint's definition of done, reviewing any new risks identified during development and adding them to the risk register, and updating the evidence pack with artefacts produced during the sprint. The sprint retrospective should include a compliance dimension: what compliance evidence was generated, what gaps remain, and what risks were introduced.
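One way to enforce this sprint-level definition of done is a simple automated check. The sketch below is illustrative only: the task names are assumptions mirroring the four compliance tasks described above, not part of any standard.

```python
# Hypothetical sketch: task names are assumptions mirroring the compliance
# tasks described in the text, not a mandated checklist.
REQUIRED_COMPLIANCE_TASKS = {
    "aisdp_modules_updated",   # design decisions recorded in the AISDP
    "full_test_suite_passed",  # including fairness and robustness gates
    "risk_register_reviewed",  # new risks added during the sprint
    "evidence_pack_updated",   # artefacts produced during the sprint
}

def sprint_definition_of_done_met(completed_tasks: set[str]) -> bool:
    """True only if every compliance task is in the completed set."""
    return REQUIRED_COMPLIANCE_TASKS <= completed_tasks

def missing_compliance_tasks(completed_tasks: set[str]) -> set[str]:
    """The compliance tasks still outstanding for this sprint."""
    return REQUIRED_COMPLIANCE_TASKS - completed_tasks
```

A check like this can run as a gate when the sprint is closed, so incomplete compliance work is surfaced in the retrospective rather than discovered later.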
The AI System Assessor assembles the AISDP incrementally throughout development, beginning from Phase 1 rather than treating it as a Phase 5 documentation exercise. Module 1 (System Identity) is completed during Phase 1 when the system is classified. Module 6 (Risk Management) is drafted during Phase 2 when the risk register is established and updated continuously as new risks are identified. Module 3 (Architecture) is populated during Phase 3 when the layered architecture is designed and refined as the architecture evolves through development.
Module 4 (Data Governance) grows as the data engineering work progresses through Phase 4. By the time Phase 5 arrives, the AISDP should be substantially complete, requiring only final review and consistency checking rather than wholesale authoring. This incremental approach distributes the documentation burden across the entire development lifecycle, ensures that documentation reflects the system as it was built rather than as it was imagined at design time, and reduces the risk of discovering fundamental non-conformities late in the cycle when remediation is costly.
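The phase-to-module schedule above can be kept as a small machine-readable map so tooling can report which modules should already be in progress at any point. This is a minimal sketch; the dictionary shape and activity wording are assumptions, while the module and phase assignments follow the text.

```python
# Hypothetical sketch of the incremental assembly schedule described above.
AISDP_ASSEMBLY_SCHEDULE = {
    1: {"module": "Module 1 (System Identity)",
        "activity": "completed when the system is classified"},
    2: {"module": "Module 6 (Risk Management)",
        "activity": "drafted with the risk register, updated continuously"},
    3: {"module": "Module 3 (Architecture)",
        "activity": "populated with the layered design, refined as it evolves"},
    4: {"module": "Module 4 (Data Governance)",
        "activity": "grows with the data engineering work"},
    5: {"module": "all modules",
        "activity": "final review and consistency checking only"},
}

def modules_expected_by(phase: int) -> list[str]:
    """Modules that should already be in progress by the given phase."""
    return [entry["module"]
            for p, entry in sorted(AISDP_ASSEMBLY_SCHEDULE.items())
            if p <= phase]
```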
Agile teams frequently use feature flags to deploy partially complete features behind toggles. For high-risk AI systems, the Technical Owner manages feature flags within the compliance framework. A feature flag that enables a new model version, a new data source, or a new decision pathway is a system change that the AI System Assessor assesses against the substantial modification thresholds. The feature flag configuration should be version-controlled and the engineering team logs each flag's activation in the deployment ledger.
The organisation should check conformity continuously throughout development, spreading the assessment workload across the full lifecycle rather than concentrating it at Phase 5. The CI/CD pipeline's quality gates provide automated continuous checking: every commit that changes the model, data, features, or configuration triggers the validation gates, the fairness gates, and the security scans. Manual assessment activities, including documentation review and evidence verification, are conducted by the Conformity Assessment Coordinator at defined milestones throughout the development process. This continuous conformity assessment approach reduces the risk of discovering fundamental non-conformities late in the development cycle when remediation is costly and the deployment timeline is under pressure.
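The trigger rule for the automated gates can be expressed as a path filter in the pipeline configuration. The sketch below assumes a repository layout with `models/`, `data/`, `features/`, and `config/` directories and illustrative gate names; both are assumptions, not part of the source.

```python
# Hypothetical sketch: any commit touching model, data, feature, or
# configuration paths must pass all automated gates. Path prefixes and
# gate names are assumptions.
GATE_TRIGGER_PREFIXES = ("models/", "data/", "features/", "config/")
AUTOMATED_GATES = ["validation_gates", "fairness_gates", "security_scans"]

def gates_for_commit(changed_paths: list[str]) -> list[str]:
    """Return the automated gates a commit must pass, given its changed files."""
    if any(path.startswith(GATE_TRIGGER_PREFIXES) for path in changed_paths):
        return AUTOMATED_GATES
    return []
```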
Many organisations will need to bring existing AI systems into compliance. These brownfield systems may have been developed before the AI Act was enacted, without the documentation practices, testing frameworks, or governance structures that compliance requires. The challenge is to achieve compliance for systems that are already operating without disrupting their service to deployers and affected persons.
The first step is a gap assessment that compares the existing system against the AISDP requirements. The AI System Assessor examines each AISDP module and identifies what documentation exists and what is missing, what testing has been performed and what is needed, and what governance controls are in place and what are absent. The gap assessment produces a remediation plan with priorities, owners, and timelines.
For brownfield systems, some documentation will need to be reconstructed from available artefacts. Training data that was not version-controlled may need to be characterised through statistical analysis of the deployed model's behaviour. Model architecture details that were not formally documented may need to be extracted from the codebase. Design decisions that were never recorded may need to be recovered through interviews with the development team. The AISDP should clearly indicate where documentation has been reconstructed rather than generated contemporaneously. Transparency about the reconstruction process is more credible to a competent authority than retroactive documentation that claims to be original.
Systems that were not developed under version control can be brought into the framework from the current point forward. The Technical SME captures the current state of all artefacts (code, models, data, and configuration) as a baseline version. The AISDP documents the date from which formal version control was established and acknowledges that the version history prior to that date is incomplete.
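Capturing the baseline can be as simple as recording a content hash for every artefact under the system root, giving a manifest against which any later change is detectable. A minimal sketch, assuming a single directory tree holds the artefacts:

```python
import hashlib
import pathlib

# Hypothetical sketch: record a baseline manifest of SHA-256 digests for
# every artefact file (code, models, data, configuration). The assumption
# that all artefacts live under one root is illustrative.
def capture_baseline(root: str) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    base = pathlib.Path(root)
    return {
        str(path.relative_to(base)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(base.rglob("*"))
        if path.is_file()
    }
```

The manifest itself is then committed as the first version-controlled artefact, dated to mark the start of the formal history.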
Systems that were not subject to the full test suite should undergo comprehensive retrospective testing: fairness testing across all protected characteristic subgroups, robustness testing with adversarial examples and input perturbation, performance benchmarking against the thresholds the AISDP will declare, and security testing. The results become the baseline against which future changes are evaluated.
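Once the retrospective results are recorded as the baseline, evaluating a future change reduces to comparing candidate metrics against it. The sketch below assumes metric names of my own choosing and that higher values are better for every metric; both are assumptions, not from the source.

```python
# Hypothetical sketch: flag regressions of a candidate change against the
# retrospective-testing baseline. Metric names and the higher-is-better
# convention are assumptions.
def regressions(baseline: dict[str, float],
                candidate: dict[str, float],
                tolerance: float = 0.0) -> dict[str, tuple[float, float]]:
    """Metrics where the candidate falls below baseline by more than tolerance.

    Returns {metric: (baseline_value, candidate_value)} for each regression.
    """
    return {
        metric: (baseline[metric], candidate[metric])
        for metric in baseline
        if metric in candidate and candidate[metric] < baseline[metric] - tolerance
    }
```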
A phased approach spreading compliance over eighteen to twenty-four months may be appropriate for brownfield systems. Phase A addresses critical gaps, including human oversight controls, serious incident reporting capability, and basic post-market monitoring (PMM). Phase B addresses documentation gaps by assembling the AISDP from existing and reconstructed artefacts. Phase C addresses infrastructure gaps by establishing version control, extending the CI/CD pipeline, and building the monitoring infrastructure. The phased plan must be documented and approved by the AI Governance Lead, with milestones demonstrating progress toward full compliance before the August 2026 deadline.