The Non-Conformity Register is a compliance artefact in its own right under Article 17, demonstrating the organisation's ability to identify, classify, and resolve assessment gaps. This page covers severity classification, the seven-stage remediation workflow, and coordination strategies for organisations operating multiple high-risk AI systems.
The Non-Conformity Register is a compliance artefact in its own right, not merely an intermediate working document. It demonstrates the organisation's ability to identify, classify, and resolve gaps in its conformity assessment processes. This capability is itself a quality management system requirement under Article 17. Organisations that treat non-conformity tracking as administrative overhead rather than a substantive compliance obligation risk undermining their entire assessment posture. The register provides auditable evidence that the organisation maintains continuous awareness of its compliance state and takes structured action to address shortfalls.
The Conformity Assessment Coordinator classifies each non-conformity into one of three severity levels, each carrying distinct remediation expectations and timelines. The classification determines whether a high-risk AI system may continue in service, may proceed to market with conditions, or requires only a documentation correction. Getting the severity classification right is essential because it directly affects market access and operational continuity.
Critical. A critical non-conformity indicates a fundamental failure to meet a requirement that could result in serious harm, a violation of fundamental rights, or a material misstatement in the declaration of conformity. The system cannot be placed on the market or continue in service until the non-conformity is resolved. Remediation must begin immediately and be verified by the assessor before the assessment can conclude. Examples include a complete absence of human oversight capability for a system requiring it, fabricated or falsified evidence, or a fundamental rights impact assessment that was never conducted.
Major. A major non-conformity indicates a significant gap that weakens the compliance posture but does not present an immediate risk of serious harm. The system may proceed to market with a documented remediation plan and a defined deadline, typically 30 to 90 days. Remediation must be verified by the assessor, and the AISDP must be updated to reflect the corrected state. Examples include fairness testing that omits a relevant protected characteristic, a post-market monitoring plan that defines metrics but has no alerting thresholds, or cybersecurity testing that was last conducted more than eighteen months ago.
Minor. A minor non-conformity is a documentation deficiency or a minor inconsistency that does not affect the system's substantive compliance. Remediation is recorded and tracked, with a deadline of up to six months. Examples include typographical errors in the AISDP, a cross-reference that points to the wrong evidence artefact, or a minor version discrepancy between the AISDP and the evidence register.
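To make the classification concrete, the three levels and their remediation windows could be encoded as follows. This is a minimal sketch in Python: the class and field names are illustrative rather than drawn from any regulatory schema, and the windows reflect the upper bounds described above.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class Severity(Enum):
    CRITICAL = "critical"  # blocks market placement until resolved
    MAJOR = "major"        # market access with a remediation plan
    MINOR = "minor"        # documentation-level fix, tracked to closure


# Illustrative remediation windows based on the text above: critical
# items block the assessment outright, major items carry a 30-90 day
# deadline, minor items up to six months (~180 days).
REMEDIATION_WINDOW = {
    Severity.CRITICAL: timedelta(days=0),   # remediation begins immediately
    Severity.MAJOR: timedelta(days=90),     # upper bound of the 30-90 day range
    Severity.MINOR: timedelta(days=180),    # up to six months
}


@dataclass
class NonConformity:
    identifier: str
    system_id: str
    severity: Severity
    description: str
    raised_on: date

    def remediation_deadline(self) -> date:
        """Latest acceptable resolution date for this finding."""
        return self.raised_on + REMEDIATION_WINDOW[self.severity]

    def blocks_market_access(self) -> bool:
        """Critical findings keep the system off the market."""
        return self.severity is Severity.CRITICAL
```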
Each non-conformity follows a defined seven-stage workflow from identification through to closure. The workflow ensures that remediation addresses the underlying cause of the gap, not merely its visible symptom.
The stages proceed as follows. First, the assessor identifies and logs the non-conformity in the register. Second, the Conformity Assessment Coordinator assigns it to the responsible person within the organisation. Third, that person conducts a root cause analysis to understand why the gap arose. Fourth, the responsible person implements the remediation action. Fifth, documented proof is produced showing that the fix has been applied. Sixth, the assessor verifies that the remediation is effective and complete. Finally, the non-conformity is marked as resolved in the register, with the closure date and verification evidence recorded.
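The workflow lends itself to a forward-only state model. The sketch below assumes stages are never skipped and uses stage names that paraphrase the descriptions above; it is illustrative, not a prescribed implementation.

```python
from enum import IntEnum


class Stage(IntEnum):
    """The seven workflow stages, in the order described above."""
    IDENTIFIED = 1   # assessor logs the finding in the register
    ASSIGNED = 2     # coordinator assigns a responsible person
    ROOT_CAUSE = 3   # responsible person analyses why the gap arose
    REMEDIATED = 4   # remediation action implemented
    EVIDENCED = 5    # documented proof of the fix produced
    VERIFIED = 6     # assessor confirms the fix is effective
    RESOLVED = 7     # closure date and verification evidence recorded


def advance(current: Stage) -> Stage:
    """Move a finding to the next stage; stages cannot be skipped."""
    if current is Stage.RESOLVED:
        raise ValueError("finding is already closed")
    return Stage(current + 1)
```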
The Conformity Assessment Coordinator reviews the register at every assessment cycle. Non-conformities that remain open beyond their deadline require escalation to the AI Governance Lead. The escalation must include a documented justification for the delay and a revised timeline for resolution.
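The review itself can be supported by a mechanical overdue check. A minimal sketch, assuming each register entry carries a deadline and a resolved flag; the OpenFinding type is hypothetical.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class OpenFinding:
    """Minimal view of a register entry for the overdue check
    (illustrative; a real register would carry more fields)."""
    identifier: str
    deadline: date
    resolved: bool


def items_requiring_escalation(register: list[OpenFinding],
                               today: date) -> list[OpenFinding]:
    """Open findings past their deadline must be escalated to the
    AI Governance Lead with a documented justification for the
    delay and a revised timeline for resolution."""
    return [f for f in register if not f.resolved and f.deadline < today]
```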
Organisations operating multiple high-risk AI systems face a coordination challenge that single-system operators do not encounter. Each system requires its own AISDP and its own conformity assessment, but many underlying compliance elements are shared: the quality management system, the data governance framework, the cybersecurity infrastructure, the training programme, and the organisational structure. Assessing each system in isolation leads to duplicated effort and inconsistent findings.
A shared evidence strategy prevents duplication by assessing common artefacts once and referencing them across multiple system assessments. Evidence artefacts that apply to multiple systems, such as quality management system documentation, organisational policies, infrastructure security configurations, and training records, are assessed once by the Conformity Assessment Coordinator. Each system's assessment then references those shared artefacts rather than reassessing them independently. The evidence register for each system should distinguish between system-specific evidence and shared evidence, with clear version references to the shared artefacts.
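One way to keep that distinction auditable is to version shared artefacts centrally and have each system's register reference them by identifier and pinned version, so drift between systems is detectable. The sketch below is illustrative; all identifiers are invented.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class SharedArtefact:
    """An artefact assessed once and referenced by many systems,
    e.g. QMS documentation or infrastructure security configurations."""
    artefact_id: str
    version: str


@dataclass
class SystemEvidenceRegister:
    system_id: str
    system_specific: list[str] = field(default_factory=list)
    # References, not copies: each entry pins an exact version of a
    # shared artefact rather than reassessing it independently.
    shared_refs: list[SharedArtefact] = field(default_factory=list)


qms_v3 = SharedArtefact("QMS-DOC", "3.2")

fraud_model = SystemEvidenceRegister(
    system_id="fraud-scoring",
    system_specific=["fairness-test-report-2024"],
    shared_refs=[qms_v3],
)
triage_model = SystemEvidenceRegister(
    system_id="cv-triage",
    shared_refs=[qms_v3],  # same shared artefact, assessed once
)
```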
A staggered assessment calendar distributes the assessment workload across the year rather than concentrating it in a single period. Organisations should stagger their assessment cycles across systems to avoid resource bottlenecks. A quarterly rolling schedule, where a subset of systems is assessed each quarter, distributes the assessor workload evenly. This approach ensures that the organisation is continuously engaged in compliance verification and avoids the pressure of an annual compliance sprint.
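The rolling schedule itself can be generated mechanically. A minimal sketch, assuming each system is assessed once per year and assignment order carries no significance; the system names are invented.

```python
def quarterly_schedule(system_ids: list[str]) -> dict[int, list[str]]:
    """Distribute systems across the four quarters round-robin so
    the assessor workload is spread evenly over the year."""
    schedule: dict[int, list[str]] = {q: [] for q in (1, 2, 3, 4)}
    for index, system in enumerate(sorted(system_ids)):
        schedule[index % 4 + 1].append(system)
    return schedule


# Example: six high-risk systems yield one or two assessments per quarter.
print(quarterly_schedule(
    ["fraud-scoring", "cv-triage", "credit-limit",
     "hr-screening", "claims-routing", "chat-escalation"]
))
```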
Cross-system findings analysis reveals systemic weaknesses that individual system assessments may miss. Non-conformities identified in one system's assessment may indicate organisational gaps that affect other systems. The AI Governance Lead should review the aggregate Non-Conformity Register across all systems at least quarterly. The review should look for patterns that suggest organisational gaps, such as recurring training deficiencies, common documentation omissions, or persistent evidence currency issues, rather than isolated system-specific problems.
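The aggregate review can start from a simple cross-tabulation: a finding category that recurs across several distinct systems points to an organisational gap rather than a local one. The sketch below assumes each register entry records a system identifier and a category tag; both are illustrative.

```python
from collections import defaultdict


def systemic_patterns(findings: list[tuple[str, str]],
                      min_systems: int = 2) -> dict[str, set[str]]:
    """Group (system_id, category) findings by category and keep
    those appearing in at least `min_systems` distinct systems."""
    by_category: dict[str, set[str]] = defaultdict(set)
    for system_id, category in findings:
        by_category[category].add(system_id)
    return {cat: systems for cat, systems in by_category.items()
            if len(systems) >= min_systems}


# A training deficiency logged against three systems suggests an
# organisational gap, not three isolated system-level problems.
print(systemic_patterns([
    ("fraud-scoring", "training-deficiency"),
    ("cv-triage", "training-deficiency"),
    ("hr-screening", "training-deficiency"),
    ("cv-triage", "stale-cyber-test"),
]))
```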