Conformity assessment is not a one-time exercise. Ongoing obligations under Articles 9, 18, and 72 require a continuous assessment model built on monthly automated checks, quarterly governance reviews, and annual formal reassessment. Trigger events, such as substantial modifications and serious incidents, require unscheduled assessment. Automated compliance checks and real-time dashboards maintain ongoing conformity between formal cycles.
The internal conformity assessment described in Annex VI is often understood as a point-in-time exercise, but the ongoing obligations under Articles 9, 18, and 72 require a more sustained approach. The initial assessment establishes that the system meets requirements at a point in time; continuous assessment ensures that conformity is maintained as the system, its data, and its operating environment change.
Organisations should supplement the formal conformity assessment with a continuous assessment programme operating on three cadences. Monthly automated checks verify technical compliance: monitoring systems are operational, all evidence artefacts are current and not overdue for refresh, no PMM metric thresholds have been breached, and all non-conformities are within their remediation deadlines. The engineering team automates these checks using scheduled scripts querying the monitoring infrastructure, evidence repository, and non-conformity register, producing a structured report.
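The monthly automated checks can be sketched as a scheduled script. The sketch below is a minimal illustration, assuming hypothetical record shapes for the evidence register and non-conformity register; a real implementation would query the monitoring infrastructure, evidence repository, and register via their APIs.

```python
from datetime import date

EVIDENCE_MAX_AGE_DAYS = 90  # assumed refresh window; set per evidence policy

def check_evidence_freshness(evidence, today):
    """Return IDs of evidence artefacts overdue for refresh."""
    return [e["id"] for e in evidence
            if (today - e["last_refreshed"]).days > EVIDENCE_MAX_AGE_DAYS]

def check_remediation_deadlines(non_conformities, today):
    """Return IDs of open non-conformities past their remediation deadline."""
    return [nc["id"] for nc in non_conformities
            if nc["status"] == "open" and nc["deadline"] < today]

def monthly_report(evidence, non_conformities, today):
    """Produce the structured report described above."""
    overdue_evidence = check_evidence_freshness(evidence, today)
    breached = check_remediation_deadlines(non_conformities, today)
    return {
        "overdue_evidence": overdue_evidence,
        "breached_deadlines": breached,
        "pass": not overdue_evidence and not breached,
    }
```

In practice the script would also ping the monitoring systems and PMM metric store; those checks are omitted here for brevity.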
Quarterly governance reviews bring the AI Governance Lead, technical leads, and the DPO Liaison together to review the monthly reports, assess the overall compliance posture, review the risk register, and make governance decisions. Annual formal reassessment repeats the full Annex VI conformity assessment process, examining every aisdp module against the current system state.
Trigger-based assessment supplements the calendar cadence. Certain events should trigger an unscheduled assessment regardless of the calendar: a substantial modification to the system, a serious incident, a regulatory enforcement action against the organisation, new harmonised standards or guidance affecting the system's compliance posture, or a material change in the deployment context such as new member states, new deployers, or new use cases.
Assessment documentation is structured by the Conformity Assessment Coordinator around the AISDP module framework, with each module mapped to its corresponding Article and Annex IV requirements.
Assessment documentation is structured by the Conformity Assessment Coordinator around the AISDP module framework, with each module mapped to its corresponding Article and Annex IV requirements. For each requirement, the assessment records the evidence demonstrating compliance (referenced by version and location in the evidence register), the assessor's determination as conformant, non-conformant, or partially conformant with an explanation, and any conditions or recommendations.
The Conformity Assessment Coordinator classifies non-conformities identified during assessment by severity and tracks them to resolution. A critical non-conformity means the system cannot be placed on the market until it is resolved. A major non-conformity allows continued operation under a defined remediation timeline. A minor non-conformity is an improvement opportunity. Each non-conformity should have a root cause analysis, a corrective action plan, an assigned owner, a deadline, and a verification step confirming the corrective action was effective.
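The severity model and tracking fields above can be represented as a simple record type. This is an illustrative sketch, not a prescribed schema; the field names are assumptions.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"  # system cannot be placed on the market until resolved
    MAJOR = "major"        # continued operation under a defined remediation timeline
    MINOR = "minor"        # improvement opportunity

@dataclass
class NonConformity:
    id: str
    severity: Severity
    root_cause: str
    corrective_action: str
    owner: str
    deadline: date
    verified_effective: bool = False  # set once corrective action is verified

    def blocks_market_placement(self) -> bool:
        """Critical findings block placement until verified resolved."""
        return self.severity is Severity.CRITICAL and not self.verified_effective

    def is_overdue(self, today: date) -> bool:
        return not self.verified_effective and today > self.deadline
```

Tracking every finding in one structure makes the register queryable, so the monthly checks can flag overdue items automatically.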
Credo AI provides this mapping as a built-in feature with Article-level checklists that can be populated with evidence references. For organisations not using Credo AI, a structured Confluence or SharePoint space with pre-built templates achieves the same result with more manual effort. Jira with a custom non-conformity issue type provides the workflow management, and the non-conformity register is retained as assessment evidence.
Manual assessment is feasible for a small number of simple systems but does not scale. Organisations with larger AI portfolios should invest in tooling that supports structured assessment. Compliance management platforms host the assessment checklist, track non-conformities, manage evidence registers, and generate assessment reports while maintaining an audit trail of the assessment process itself.
Commercial platforms in the GRC space include OneTrust, ServiceNow GRC, Archer, and IBM OpenPages. AI-specific compliance platforms including Credo AI, Holistic AI, and Monitaur are purpose-built for AI governance. Credo AI provides the strongest regulatory mapping capability, mapping compliance controls directly to EU AI Act articles and generating evidence gap reports. Holistic AI provides integrated bias detection, explainability assessment, and compliance workflow. Monitaur focuses on model performance monitoring with governance overlay.
The key platform requirements are structured checklist management with Article-level traceability, non-conformity tracking with severity classification and remediation workflow, evidence register with metadata tagging and expiry monitoring, assessment report generation with AISDP module mapping, and role-based access control with audit trail.
A centralised, version-controlled evidence repository with metadata tagging enables efficient evidence retrieval during assessment and automated freshness monitoring between assessments. Git-based repositories are suitable for code and configuration evidence. Document management systems handle narrative documentation and governance records. Object storage with lifecycle policies handles large binary artefacts including model files and dataset snapshots. The evidence repository should enforce immutability for submitted evidence and retain it for the full ten-year retention period.
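The ten-year retention rule can be enforced with a small check in the repository's lifecycle tooling. A minimal sketch, assuming a hypothetical register entry with a `submitted` date:

```python
from datetime import date

RETENTION_YEARS = 10  # retention period for submitted evidence

def retention_expiry(submitted: date) -> date:
    """Date after which a submitted artefact's retention window closes."""
    try:
        return submitted.replace(year=submitted.year + RETENTION_YEARS)
    except ValueError:  # 29 February submissions roll back to 28 February
        return submitted.replace(year=submitted.year + RETENTION_YEARS, day=28)

def may_delete(entry: dict, today: date) -> bool:
    """Submitted evidence is immutable and deletable only after retention ends."""
    return today >= retention_expiry(entry["submitted"])
```

Object stores can express the same rule declaratively via lifecycle policies; the function form is useful for the automated checks that audit those policies.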
Integration between the compliance management platform and the technical infrastructure can automate many routine verification tasks. The platform can automatically verify that the model version in production matches the version documented in the AISDP, that PMM metrics are being computed at the documented frequency, and that no critical vulnerability scan findings remain unresolved beyond their deadline.
Open Policy Agent can codify compliance rules as policies evaluated automatically against the system's live state. Custom integrations using the APIs of the model registry, CI/CD platform, and monitoring infrastructure enable real-time compliance dashboards that surface drift between the documented state and the actual state. An OPA policy can verify AISDP completeness by checking that all twelve modules are present, that all modules have been reviewed within 90 days, that the production model matches the documented version, that PMM reports are generated at the documented frequency, and that no critical vulnerabilities remain unresolved beyond the seven-day deadline.
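The rule set above can be prototyped before committing to OPA. The sketch below expresses most of those checks in Python over a `state` dictionary that mirrors what an OPA input document might contain; in production the same rules would be written in Rego and evaluated by OPA itself, and the field names here are assumptions.

```python
from datetime import date

AISDP_MODULE_COUNT = 12   # the twelve-module AISDP framework
REVIEW_WINDOW_DAYS = 90   # maximum age of a module review
VULN_DEADLINE_DAYS = 7    # resolution deadline for critical vulnerabilities

def evaluate_conformity(state: dict, today: date) -> list:
    """Return the list of policy violations found in the live state."""
    violations = []
    if len(state["aisdp_modules"]) != AISDP_MODULE_COUNT:
        violations.append("aisdp_incomplete")
    if any((today - m["last_reviewed"]).days > REVIEW_WINDOW_DAYS
           for m in state["aisdp_modules"]):
        violations.append("stale_module_review")
    if state["production_model_version"] != state["documented_model_version"]:
        violations.append("model_version_drift")
    if any((today - v["found"]).days > VULN_DEADLINE_DAYS
           for v in state["critical_vulnerabilities"]):
        violations.append("vuln_deadline_breached")
    return violations
```

The PMM report-frequency check is omitted for brevity; it follows the same pattern of comparing the documented cadence against observed report timestamps.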
Assessment workflow tooling reduces administrative overhead and ensures procedural steps are not inadvertently skipped. Workflow platforms can enforce the assessment sequence, route non-conformities to correct owners, track remediation deadlines, and generate the assessment report once all checklist items are resolved.
For smaller organisations or those with a single high-risk system, a lightweight approach may be appropriate: a Confluence or SharePoint space with structured templates, a Jira project for non-conformity tracking, and scheduled scripts for automated compliance checks. This approach has lower licensing cost but higher manual effort and greater risk of process breakdown as the portfolio scales. For larger portfolios, the standardisation and automation that AI-specific platforms provide reduces the per-system compliance overhead.
Annually at minimum, with monthly automated checks and quarterly governance reviews maintaining conformity between formal cycles. Trigger events require immediate unscheduled assessment.
No. Automated checks maintain ongoing technical compliance between assessments but cannot replace the human judgement, stakeholder interviews, and synthesis of a formal Annex VI assessment.
Credo AI provides Article-level checklists as a built-in feature. Alternatives include structured Confluence or SharePoint spaces with templates and Jira for non-conformity tracking. Spreadsheets work for one to three systems.