Organisations operating AI systems developed before the EU AI Act face a distinct compliance challenge. Brownfield systems require gap assessment, documentation reconstruction, retrofitted version control, and retrospective testing to meet the regulatory standard without disrupting ongoing operations.
Organisations with AI systems already in production must bring those systems into compliance with the EU AI Act without disrupting their ongoing service. These brownfield systems may have been developed before the Act was enacted, without the documentation practices, testing frameworks, or governance structures that compliance demands. The challenge is achieving full compliance for systems that are already operating, serving deployers and affected persons, while meeting the regulatory deadline.
The starting point is a structured gap assessment. The AI System Assessor examines each module of the AISDP and identifies what documentation exists, what is missing, what testing has been performed, what testing remains outstanding, what governance controls are in place, and what controls are absent. This gap assessment produces a remediation plan with priorities, owners, and timelines that guides the entire brownfield compliance effort.
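The shape of such a remediation plan can be sketched as a small data model. This is a minimal illustration only; the module names, field names, and roles shown are hypothetical placeholders, not taken from the AISDP template itself.

```python
from dataclasses import dataclass

# Illustrative gap-assessment record; all names below are hypothetical.
@dataclass
class GapItem:
    module: str        # AISDP module the gap relates to
    description: str   # what is missing or incomplete
    priority: int      # 1 = highest regulatory risk
    owner: str         # role accountable for remediation
    target_week: int   # deadline, in weeks from plan approval

def remediation_plan(gaps: list[GapItem]) -> list[GapItem]:
    """Order gaps by regulatory priority, then by earliest deadline."""
    return sorted(gaps, key=lambda g: (g.priority, g.target_week))

gaps = [
    GapItem("Testing", "No fairness test results", 2, "Technical SME", 8),
    GapItem("Governance", "No incident reporting process", 1, "AI Governance Lead", 4),
    GapItem("Documentation", "Training data undocumented", 2, "AI System Assessor", 6),
]
plan = remediation_plan(gaps)
# The highest-priority gap (incident reporting) now comes first.
```

Ordering by priority first and deadline second mirrors the plan's role: it tells owners what to start on, not merely what is due soonest.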
The remediation plan must account for the fact that the system continues to operate throughout the compliance process. Any changes to the system during remediation, whether to add monitoring, restructure data pipelines, or retrofit governance controls, must be managed to avoid disrupting service to deployers and affected persons. Delivery Process and Timeline covers the broader phased delivery framework within which brownfield remediation sits.
Documentation reconstruction is the most labour-intensive aspect of brownfield compliance because records were often not created during original development. For systems where training data was not version-controlled at the time of development, the data may need to be characterised through statistical analysis of the deployed model's behaviour. Model architecture details that were never formally documented may need to be extracted directly from the codebase by the development team.
Design decisions that were never recorded present a particular challenge. These may need to be recovered through interviews with the original development team, capturing rationale that exists only in institutional memory. The AISDP should clearly indicate where documentation has been reconstructed rather than generated contemporaneously during development. Transparency about the reconstruction process is more credible to a competent authority than retroactive documentation that claims to be original. This distinction matters during any future conformity assessment or market surveillance review.
Systems that were not developed under version control can be brought into a version control framework from the current point forward. The Technical SME captures the current state of all artefacts, including code, models, data, and configuration, as a baseline version. This snapshot establishes the starting point for formal change tracking.
The AISDP must document the date from which formal version control was established and acknowledge that the version history prior to that date is incomplete. All future changes from the baseline are then fully version-controlled, providing the audit trail that the regulation expects. Version Control and Traceability provides detailed guidance on establishing version control practices for AI system artefacts.
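One way to make the baseline snapshot verifiable is a content-addressed manifest: a hash of every artefact at the moment version control begins. The sketch below assumes a simple directory layout and file patterns purely for illustration; the real artefact set and manifest format would follow the organisation's own tooling.

```python
import hashlib
import json
from pathlib import Path

def baseline_manifest(root: str, patterns=("*.py", "*.pkl", "*.yaml")) -> dict:
    """Record a SHA-256 digest for every matching artefact under root,
    giving a verifiable snapshot of the system's current state."""
    manifest = {}
    for pattern in patterns:
        for path in sorted(Path(root).rglob(pattern)):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

# Hypothetical usage: snapshot the artefact directory and persist the
# result as the baseline record referenced by the AISDP.
# manifest = baseline_manifest("ai_system/")
# Path("baseline_v1.json").write_text(json.dumps(manifest, indent=2))
```

Any later change to code, model weights, data, or configuration then shows up as a digest mismatch against the baseline, which is exactly the audit trail the regulation expects from the baseline date onward.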
Systems that were not subject to comprehensive testing during development should undergo retrospective testing to establish a compliance baseline. This testing covers fairness testing across all protected characteristic subgroups, robustness testing using adversarial examples and input perturbation techniques, performance benchmarking against the thresholds that the AISDP will declare, and security testing of the deployed system.
The results of this retrospective testing become the baseline against which all future changes are evaluated. Any modification to the system after the baseline is established triggers retesting against the declared thresholds, following the same post-market monitoring discipline that applies to newly developed systems. This baseline approach ensures that brownfield systems, once brought into compliance, maintain the same standard of ongoing assurance as greenfield deployments.
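A minimal version of the fairness portion of that baseline can be sketched as a per-subgroup accuracy check against declared thresholds. The metric choice, the threshold values, and the disparity limit below are assumptions for illustration, not values mandated by the Act.

```python
# Illustrative baseline check: per-subgroup accuracy versus declared
# thresholds. The 0.05 disparity limit is a hypothetical example.

def subgroup_accuracy(labels, preds, groups):
    """Accuracy per protected-characteristic subgroup."""
    scores = {}
    for g in set(groups):
        pairs = [(l, p) for l, p, gr in zip(labels, preds, groups) if gr == g]
        scores[g] = sum(l == p for l, p in pairs) / len(pairs)
    return scores

def within_baseline(scores: dict, min_accuracy: float, max_disparity: float) -> bool:
    """True if every subgroup meets the declared accuracy threshold and
    the gap between best and worst subgroups stays within the limit."""
    return (min(scores.values()) >= min_accuracy
            and max(scores.values()) - min(scores.values()) <= max_disparity)

scores = subgroup_accuracy(
    labels=[1, 0, 1, 1, 0, 1],
    preds=[1, 0, 1, 0, 0, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)
# scores == {"a": 1.0, "b": 2/3}
```

Because every future change is retested with the same function against the same declared thresholds, a regression in any subgroup is detected rather than averaged away in an aggregate metric.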
Brownfield compliance need not be achieved in a single effort; a phased approach is often more practical for complex systems already in production. Phase A addresses critical gaps first: human oversight controls, serious incident reporting capability, and basic post-market monitoring. These are prioritised because they address the most immediate regulatory risks.
Phase B then addresses documentation gaps by assembling the AISDP from existing and reconstructed artefacts, bringing the evidentiary record to the standard required for conformity assessment. Phase C addresses infrastructure gaps: establishing version control, extending the CI/CD pipeline, and building the monitoring infrastructure needed for ongoing compliance. The phased plan must be documented and approved by the AI Governance Lead, with milestones that demonstrate progress toward full compliance before the August 2026 deadline.
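A phased plan of this kind reduces to a milestone schedule checked against the deadline. The milestone dates below are placeholders; only the deadline reflects the Act's August 2026 general application date.

```python
from datetime import date

# 2 August 2026 is the Act's general application date; the phase
# milestone dates below are hypothetical placeholders.
DEADLINE = date(2026, 8, 2)

phases = {
    "A: critical gaps": date(2025, 12, 1),
    "B: documentation": date(2026, 4, 1),
    "C: infrastructure": date(2026, 7, 1),
}

def on_track(phases: dict, deadline: date) -> bool:
    """Every phase milestone must land before the compliance deadline."""
    return all(d < deadline for d in phases.values())
```

Encoding the milestones this way gives the AI Governance Lead an unambiguous record for the approval the text describes, and a trivially re-runnable check at each status review.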
Organisations with multiple high-risk AI systems must coordinate parallel AISDP preparation tracks to avoid resource bottlenecks. Without coordination, parallel tracks compete for the same people: the AI Governance Lead's time, the legal and regulatory advisor's attention, and the engineering team's capacity.
Portfolio prioritisation should follow four axes. Risk tier comes first, with highest-risk systems taking priority. Deployment timeline follows, addressing systems approaching deadlines before those in early development. Deployment scale matters because systems affecting more people carry greater enforcement risk. Compliance readiness also factors in; systems with less existing documentation require more effort and should start earlier.
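The four axes above can be combined into a single ranking, for example as a weighted score. The weights and the 0-to-1 scales used here are assumptions for the sketch, not prescribed values; any monotonic combination of the four axes would serve.

```python
# Illustrative portfolio scoring across the four prioritisation axes.
# Weights and scales are hypothetical assumptions.

def priority_score(risk_tier, weeks_to_deadline, deployment_scale, readiness):
    """Higher score = start earlier. Risk tier, deployment scale, and
    low readiness raise priority; a distant deadline lowers it."""
    urgency = 1.0 / max(weeks_to_deadline, 1)
    return (0.4 * risk_tier + 0.3 * urgency
            + 0.2 * deployment_scale + 0.1 * (1 - readiness))

portfolio = {
    "credit-scoring": priority_score(1.0, 20, 0.9, 0.3),
    "cv-screening": priority_score(0.8, 52, 0.4, 0.7),
    "chat-triage": priority_score(0.5, 80, 0.2, 0.9),
}
ordered = sorted(portfolio, key=portfolio.get, reverse=True)
# credit-scoring ranks first: highest risk, nearest deadline, widest scale.
```

The value of an explicit score is less the number itself than the forced conversation about weights: it makes the portfolio's prioritisation rationale reviewable rather than implicit.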
The AI Governance Lead, Legal and Regulatory Advisor, Conformity Assessment Coordinator, and Internal Audit Assurance Lead are typically shared across the portfolio. Their availability must be planned against the portfolio's milestone calendar. Governance gates such as CDR approval, risk register acceptance, and Declaration of Conformity signing should be staggered to avoid queuing. If multiple systems reach the conformity assessment phase simultaneously, the assessment workload may exceed the available capacity.
Systems that share common components, such as the same GPAI model, the same data sources, or the same deployment infrastructure, can share compliance artefacts to reduce duplication of effort. A GPAI model risk assessment conducted for one system can be reused with system-specific adaptation for another. Data governance documentation for a shared data source is written once and referenced by multiple AISDPs, avoiding duplicated effort across parallel tracks. The organisation should identify these synergies early in the portfolio planning process and structure the compliance work to produce reusable artefacts where possible.
The AI Governance Lead should establish a portfolio-level governance cadence that sits above the individual system governance for each brownfield remediation track. A monthly portfolio status review tracks each system's progress against its phase milestones, identifying delays or resource conflicts before they become critical.
A quarterly resource review assesses whether the planned resource allocation is sufficient given actual progress and any changes in scope or priority. An annual strategic review assesses the portfolio's overall compliance posture and plans for the coming year. This layered cadence ensures that brownfield compliance across multiple systems remains coordinated, with shared resources allocated where they are most needed.
Resource estimation for brownfield systems requires adjusting the standard delivery timeline to account for the additional work of bringing existing systems into compliance. Several factors increase the effort required.
If the system incorporates a third-party GPAI model with limited provider disclosures, organisations should allow an additional three to six weeks for compensating due diligence and testing. A brownfield deployment with limited existing documentation adds four to ten weeks depending on the scale of documentation gaps. Systems in a biometric identification domain requiring third-party conformity assessment add six to twelve weeks for notified body engagement. Multi-member-state deployment adds two to four weeks for multi-jurisdiction registration and translation requirements.
Where the organisation has no existing compliance infrastructure covering version control, CI/CD, and monitoring and must build it concurrently, an additional eight to sixteen weeks may be needed, though this investment benefits all subsequent systems.
Factors can also decrease effort. Systems using a well-documented, intrinsically explainable model architecture can reduce the technical documentation phase by one to two weeks. Organisations that have completed AISDP preparation for a comparable system and can reuse shared artefacts may reduce total effort by twenty to thirty percent. Low-complexity applications with a small number of input features, a single model component, and a single deployer can reduce the testing and validation phase by three to six weeks.
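The adjustment ranges above translate directly into a best-case/worst-case timeline estimate. The factor keys and the base duration in the example are illustrative; the week ranges are taken from the text, and the 20-30% reuse discount for shared artefacts is left out of this simple additive model.

```python
# Timeline adjustments, as (min_weeks, max_weeks) from the ranges in
# the text; positive values extend the plan, negative values shorten it.
ADJUSTMENTS = {
    "gpai_limited_disclosure": (3, 6),
    "brownfield_doc_gaps": (4, 10),
    "biometric_notified_body": (6, 12),
    "multi_member_state": (2, 4),
    "no_compliance_infrastructure": (8, 16),
    "explainable_architecture": (-2, -1),
    "low_complexity": (-6, -3),
}

def timeline_range(base_weeks: int, factors: list[str]) -> tuple[int, int]:
    """Return (best-case, worst-case) total weeks for the given factors."""
    lo = base_weeks + sum(ADJUSTMENTS[f][0] for f in factors)
    hi = base_weeks + sum(ADJUSTMENTS[f][1] for f in factors)
    return lo, hi

# Hypothetical example: a 26-week base plan for a brownfield system with
# documentation gaps, deployed in several member states.
# timeline_range(26, ["brownfield_doc_gaps", "multi_member_state"]) -> (32, 40)
```

Keeping the ranges as data rather than prose makes it easy to re-estimate when the gap assessment reveals additional factors mid-remediation.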
As a rough guide, the fully loaded cost for preparing an AISDP for a medium-complexity high-risk system ranges from EUR 150,000 to EUR 400,000 for initial preparation, with annual ongoing compliance costs of EUR 50,000 to EUR 150,000. These figures vary widely by jurisdiction, organisation size, and system complexity. The AI Governance Lead should validate them against the organisation's specific circumstances during the initial planning phase. Where the organisation has no existing compliance infrastructure and must build it concurrently, the upfront investment is higher but benefits all subsequent systems brought into compliance.
Where original documentation does not exist, it should be reconstructed from available artefacts such as codebases, model behaviour analysis, and developer interviews. The AISDP should clearly indicate that documentation was reconstructed, as transparency is more credible to a competent authority than retroactive documentation that claims to be original.
Brownfield compliance can be phased. A three-phase approach is recommended: Phase A addresses critical gaps such as human oversight controls and incident reporting, Phase B assembles the AISDP from existing and reconstructed artefacts, and Phase C builds infrastructure for version control and monitoring. The plan must be documented with milestones showing progress before the August 2026 deadline.
Capture the current state of all artefacts as a baseline version and begin formal version control from that point forward. The AISDP must document when version control was established and acknowledge that prior history is incomplete. All changes after the baseline are then fully tracked.
Retrospective testing should cover fairness testing across protected characteristic subgroups, robustness testing with adversarial examples, performance benchmarking against declared thresholds, and security testing of the deployed system.
For a medium-complexity high-risk system, initial AISDP preparation typically costs EUR 150,000 to EUR 400,000, with annual ongoing compliance costs of EUR 50,000 to EUR 150,000.
The recommended cadence comprises monthly portfolio status reviews, quarterly resource reviews, and annual strategic reviews, all sitting above individual system governance to coordinate shared resources.
Brownfield compliance does not end at the point of achieving initial conformity. The system's lifecycle extends through operational monitoring and eventually into end-of-life. Organisations should treat the end-of-life process as the final phase of the delivery lifecycle, governed by the same discipline of planning, execution, documentation, and review that applies to every earlier phase. The end-of-life plan serves as the equivalent of the deployment plan, and the decommissioning record serves as the equivalent of the deployment evidence pack.