Eleven recurring patterns undermine AISDP credibility, increase enforcement risk, and waste resources across organisations of all sizes and sectors. Each pitfall is a systemic failure mode, not an isolated mistake. Understanding them before beginning compliance work prevents the most expensive rework.
The pitfalls cluster into three categories: documentation failures (retrospective documentation, legal document syndrome, the empty evidence pack), design failures (human oversight as checkbox, cybersecurity as afterthought, oversight designed after deployment), and operational failures (compliance-at-deployment, suppressed escalation, scope creep without reclassification, ignoring cumulative change, decommissioning as afterthought).
Attempting to reconstruct the development process from memory after the system is built produces documentation that is inaccurate, incomplete, and obviously post-hoc to any experienced reviewer. A notified body or competent authority will compare the AISDP's claims against the evidence pack and the deployed system. Gaps between narrative and evidence are the first thing assessors look for.
The solution is to generate documentation as a byproduct of the engineering workflow, not as a separate activity performed after the fact. The CI/CD pipeline described in CI/CD Pipeline for AI Compliance automates evidence generation at every stage.

Closely related is legal document syndrome: treating the AISDP as a legal document (vague, carefully hedged, written to minimise exposure) when it should be a technically precise record. Hedging and vagueness are not protective; they invite deeper scrutiny.
The empty evidence pack is the third documentation pitfall. An AISDP narrative without supporting evidence is a narrative without proof. Every material claim must trace to a specific, retrievable artefact. If the evidence register has gaps, the assessor flags them as non-conformities.
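The trace-every-claim requirement lends itself to automation. The sketch below is a hypothetical illustration, not a mandated tool: it assumes a register that maps claim IDs to artefact paths (the function name and register shape are this example's own assumptions) and flags any claim whose artefact cannot be retrieved.

```python
# Hypothetical sketch: flag AISDP claims whose supporting evidence
# artefact is missing from the evidence store. Claim IDs, paths, and
# the register format are illustrative assumptions.
from pathlib import Path


def find_evidence_gaps(claims: dict[str, str], evidence_root: str) -> list[str]:
    """Return the claim IDs whose referenced artefact is not retrievable.

    `claims` maps a claim ID to the relative path of its supporting
    artefact under `evidence_root`.
    """
    root = Path(evidence_root)
    return [
        claim_id
        for claim_id, artefact in claims.items()
        if not (root / artefact).is_file()
    ]
```

Run as a scheduled check, any non-empty result is exactly the gap an assessor would flag as a non-conformity.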
Documenting that human oversight exists without designing the operational reality is one of the most consequential compliance failures. Article 14 requires operational design, not a policy statement. The oversight interface, training programme, override capability, workload management, automation bias countermeasures, and escalation pathways must all be designed, tested, and validated before the system begins making decisions that affect real people.
A related failure is oversight designed after deployment: building the operational oversight framework after the system is live. Operators need training, interfaces need testing, escalation pathways need rehearsal, and break-glass procedures need validation before go-live. Operational Oversight and Human Control provides the complete oversight architecture.
Suppressed escalation is the cultural failure mode that undermines even well-designed oversight. Creating formal escalation pathways while cultivating a culture where using them carries implicit or explicit career risk suppresses the information the organisation needs most. The non-retaliation commitment must be a lived organisational value, not merely a policy document.
Compliance-at-deployment treats compliance as a gate to pass once, overlooking the ongoing obligation. The AISDP is a living document, the post-market monitoring system under Article 72 must operate continuously, and the AI System Assessor updates the risk register as new risks emerge. Compliance is a continuous process, not an event.
Cybersecurity as afterthought is equally damaging. Bolting security on as a final pre-deployment gate costs more to remediate than embedding it from the outset through DevSecOps practices. Retroactive security assessments find more problems and delay deployment. Cybersecurity for AI Systems covers the embedded security approach.
Decommissioning as afterthought treats system end-of-life as an operational task with no governed compliance process. A system shut down without a structured end-of-life plan risks orphaned personal data violating GDPR retention limits, unrevoked credentials creating security vulnerabilities, deployers left without transition support, and a ten-year documentation retention obligation with no one assigned to manage it. The end-of-life process should be planned during the architecture phase and documented in the AISDP from the outset.
Expanding a system's use beyond its documented purpose without a reclassification review is a direct breach of the AI Act. A system classified as non-high-risk under Article 6(3) that drifts beyond its intended purpose and no longer qualifies for the exception is operating outside the law, regardless of how capable the system is.
The closely related cumulative change problem involves making a series of individually sub-threshold changes that collectively constitute a substantial modification. Each change passes the automated quality gates, but the aggregate effect alters the system's behaviour significantly. Cumulative change tracking is the control for this pitfall.
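The control can be reduced to a simple invariant: track a running change score across releases and compare the aggregate, not just each increment, against the substantial-modification threshold. The sketch below is a minimal illustration under assumed conventions; the normalised scoring scale and threshold value are this example's assumptions, not figures from the AI Act.

```python
# Hypothetical sketch: detect when individually sub-threshold changes
# accumulate into a substantial modification. The normalised scoring
# scale and the threshold of 1.0 are illustrative assumptions.
SUBSTANTIAL_MODIFICATION_THRESHOLD = 1.0


def cumulative_drift(change_scores: list[float],
                     threshold: float = SUBSTANTIAL_MODIFICATION_THRESHOLD) -> bool:
    """True if the aggregate of changes since the last reclassification
    review crosses the substantial-modification threshold, even when
    every individual change stayed below it."""
    return sum(change_scores) >= threshold


# Three changes of 0.3, 0.4, 0.4 each pass an individual gate of 1.0,
# but their aggregate (1.1) should trigger a reclassification review.
```

On a triggered review, the running total would reset, which is why the tracking window is "since the last reclassification review" rather than the system's whole lifetime.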
Both failures share a root cause: the absence of systematic review triggers. Organisations need periodic reclassification reviews (not just at deployment), automated cumulative drift monitoring, and governance processes that flag when operational use diverges from documented purpose. Version Control and Change Management covers the cumulative change tracking mechanism, and Risk Classification Under the EU AI Act covers reclassification triggers.
Prevention requires embedding controls into the engineering and governance workflow rather than relying on periodic audits.
Prevention requires embedding controls into the engineering and governance workflow rather than relying on periodic audits. For documentation pitfalls, automate evidence generation through the CI/CD pipeline so that documentation is a byproduct of development, not a separate activity.
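As one concrete shape this automation can take, a pipeline stage can append a tamper-evident record (stage name, timestamp, artefact hash) to an evidence register every time it runs. The function name, field names, and JSON-lines format below are assumptions for illustration, not a prescribed schema.

```python
# Hypothetical sketch: emit an audit-grade evidence record as a
# byproduct of a CI/CD stage. The record schema and JSON-lines
# register format are illustrative assumptions.
import datetime
import hashlib
import json


def record_evidence(stage: str, artefact_bytes: bytes, register_path: str) -> dict:
    """Append a record tying a pipeline stage to the exact artefact it
    produced, identified by its SHA-256 hash."""
    record = {
        "stage": stage,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sha256": hashlib.sha256(artefact_bytes).hexdigest(),
    }
    with open(register_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because the record is written by the same job that produced the artefact, the register grows automatically with every build, test, and deployment, with no retrospective reconstruction required.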
For oversight pitfalls, design the operational framework during the architecture phase and validate it before deployment. Include oversight interface testing, operator training, escalation rehearsals, and break-glass validation in the pre-deployment checklist. Conduct anonymous escalation culture surveys quarterly.
For operational pitfalls, treat the AISDP as a living document with scheduled reviews, deploy continuous post-market monitoring from day one, embed security through DevSecOps practices, and plan the end-of-life process during architecture design. For scope creep, implement automated cumulative change tracking and periodic reclassification reviews. The Compliance Maturity Model provides the progression framework for moving from reactive to embedded prevention.