This section covers the assessment structure, documentation standards, assessor independence, organisational roles, notified body engagement, non-conformity management, and continuous assessment.
Article 43(2) provides that for the majority of high-risk AI systems classified under Annex III, the default conformity assessment procedure is an internal one based on Annex VI. Compliance is self-assessed. This is not a lighter obligation than third-party assessment; it is a different kind of obligation that places full responsibility for the rigour, completeness, and honesty of the assessment on the provider.
The internal assessment verifies that the technical documentation required by Article 11 satisfies the Annex IV specification and that the system complies with all Chapter 2 requirements, as Article 8 mandates. A competent authority or notified body reviewing the results will expect the same evidentiary standard regardless of who conducted the assessment. Organisations that treat internal conformity assessment as a cursory self-certification exercise are exposed to significant risk.
Market surveillance authorities have the power under Article 74 to request full technical documentation, to conduct evaluations, and to require corrective actions. An internal assessment that cannot withstand scrutiny is worse than no assessment at all, because it carries the provider's signed Declaration of Conformity, which is a legally binding statement. The AISDP is the evidentiary backbone that the assessment evaluates; without a rigorous assessment process, even a well-constructed AISDP provides no compliance assurance.
For biometric identification systems under Annex III point 1, Article 43(1) requires third-party assessment by a notified body. Providers of other high-risk systems may voluntarily seek notified body involvement to strengthen credibility or satisfy deployer contractual requirements.
Annex VI requires the provider to verify that the quality management system complies with Article 17, to examine the technical documentation to assess whether the system complies with Articles 8 through 15, and to verify that the design and development process and post-market monitoring are consistent with the technical documentation. In practice, this translates into three distinct assessment workstreams.
The quality management system assessment verifies that each element required by Article 17 exists, is documented, and is operational. Article 17 requires a compliance strategy, techniques and procedures for design and development, quality control procedures, data management procedures, record-keeping obligations, a resource management framework, an accountability framework, conformity assessment procedures, a corrective action process, and a post-market monitoring plan. "Exists" means a written policy or procedure. "Documented" means the documentation is current, version-controlled, and accessible. "Operational" means the procedure is actually followed in practice, with evidence of execution such as audit trails, sign-off records, and training records.
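The exists/documented/operational distinction can be made concrete as a per-element check. The following is a minimal sketch under stated assumptions: the class, field names, and example element are illustrative, not prescribed by Article 17.

```python
# Hypothetical sketch: recording the three verification dimensions for each
# Article 17 QMS element. Names and fields are illustrative, not from the Act.
from dataclasses import dataclass

@dataclass
class QmsElementCheck:
    element: str          # e.g. "corrective action process"
    exists: bool          # a written policy or procedure is on file
    documented: bool      # current, version-controlled, accessible
    operational: bool     # evidence of execution: audit trails, sign-offs

    def passes(self) -> bool:
        # All three dimensions must hold for the element to pass.
        return self.exists and self.documented and self.operational

check = QmsElementCheck("post-market monitoring plan",
                        exists=True, documented=True, operational=False)
print(check.passes())  # a documented plan that is never executed fails
```

The point of the third flag is that a polished written procedure alone does not pass; only evidence of execution does.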
The technical documentation assessment examines the AISDP against Articles 8 through 15 and Annex IV. This is a technical verification exercise, not a document review in the literary sense. The assessor must confirm that documentation accurately describes the system as built and deployed, that claims are supported by evidence, and that evidence is sufficient, authentic, and current. For each AISDP module, four dimensions are verified: content completeness, evidence sufficiency, consistency with other modules and the deployed system, and currency reflecting the system's current state.
The three workstreams are conceptually distinct but overlap in practice. A structured methodology sequences the assessment efficiently and catches inconsistencies early, proceeding through five phases.
Phase 1 is a desktop review spanning two to five days. The assessor reads the complete AISDP and supporting documentation without accessing the live system, identifying gaps, internal inconsistencies, and missing evidence references. This produces an initial findings list and clarification questions for the Technical SME and Business Owner.
Phase 2 is evidence verification, taking three to eight days. The assessor works through the evidence register, verifying each referenced artefact exists, is accessible, is the correct version, and supports the claim it is cited for. This produces a verified evidence log and Non-Conformity Register entries for missing, expired, or insufficient evidence.
Phase 3 is live system verification over two to four days. The assessor examines the deployed system to confirm behaviour matches documentation, covering production configuration, the human oversight interface, post-market monitoring reports, alert logs, monitoring dashboards, and logging infrastructure.
Phase 4 comprises stakeholder interviews over one to three days. The assessor interviews the Technical SME on architecture, testing, and deployment; the Business Owner on intended purpose, oversight model, and escalation; and Operators on override capability and exercise criteria. Interview records and training gap findings are documented.
Phase 5 is synthesis and reporting over two to three days. Findings from all phases are consolidated, non-conformities classified by severity, and the overall assessment conclusion reached. The Assessment Report states one of three determinations: conformity demonstrated, conformity demonstrated subject to remediation, or conformity not demonstrated. Total assessment duration for a single system typically spans ten to twenty-three days.
The internal assessment produces its own documentation distinct from the AISDP itself. This assessment documentation demonstrates that the assessment was conducted rigorously and that the Declaration of Conformity is justified.
The Assessment Plan is prepared before the assessment begins. It defines scope, assessment methodology, assessor team composition and qualifications, the assessment schedule, and acceptance criteria. The AI Governance Lead approves the plan before assessment commences.
The Assessment Checklist maps every requirement of Articles 8 through 15, Article 17, and Annex IV to specific questions, evidence expectations, and pass/fail criteria. The checklist must be granular; a single item such as "Article 10 compliance" is insufficient. Each sub-requirement of Article 10, covering relevance, representativeness, freedom from errors, completeness, statistical properties, bias detection measures, and special category data processing, should be a separate checklist item.
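The required granularity can be illustrated by expanding Article 10 into its sub-requirements, as the paragraph above lists them. This is a sketch only; the item structure, field names, and evidence descriptions are assumptions for illustration.

```python
# Illustrative sketch of a granular checklist: each Article 10 sub-requirement
# becomes its own item with an evidence expectation and a pass/fail result.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    article: str
    sub_requirement: str
    evidence_expected: str
    passed: Optional[bool] = None   # None until assessed

ARTICLE_10_ITEMS = [
    ChecklistItem("Art. 10", "relevance", "dataset documentation"),
    ChecklistItem("Art. 10", "representativeness", "population coverage analysis"),
    ChecklistItem("Art. 10", "freedom from errors", "data quality report"),
    ChecklistItem("Art. 10", "completeness", "missing-data audit"),
    ChecklistItem("Art. 10", "statistical properties", "distribution summaries"),
    ChecklistItem("Art. 10", "bias detection measures", "fairness test results"),
    ChecklistItem("Art. 10", "special category data", "documented justification"),
]

# A single "Article 10 compliance" line would collapse these seven distinct
# verifications into one unfalsifiable tick-box.
print(len(ARTICLE_10_ITEMS))  # 7
```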
The Evidence Register catalogues every artefact reviewed during assessment, with its location, version, date, and the assessment finding it supports. It bridges assessment findings and underlying proof.
The Non-Conformity Register records every identified gap or inconsistency with a unique identifier, severity classification (critical, major, minor), description, affected AISDP module and Article, required remediation, responsible person, deadline, and verification method.
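A register entry with the fields listed above might be modelled as follows. The identifiers, enum values, and example content are hypothetical; only the field set comes from the text.

```python
# Minimal sketch of a Non-Conformity Register entry; identifiers and the
# example values are illustrative placeholders.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"   # blocks market placement until resolved
    MAJOR = "major"         # remediation plan, typically 30-90 days
    MINOR = "minor"         # remediation within up to six months

@dataclass
class NonConformity:
    nc_id: str              # unique identifier
    severity: Severity
    description: str
    module: str             # affected AISDP module
    article: str            # affected Article
    remediation: str        # required remediation
    owner: str              # responsible person
    deadline: str           # ISO date
    verification: str       # verification method

nc = NonConformity("NC-2026-014", Severity.MAJOR,
                   "Fairness testing omits a protected characteristic",
                   "Testing", "Art. 10", "Re-run fairness suite",
                   "Technical SME", "2026-09-30", "Assessor re-review")
print(nc.severity.value)  # major
```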
The Assessment Report is the formal output. It summarises scope, methodology, findings, non-conformities, and the overall conclusion. It is signed by the lead assessor and reviewed by the AI Governance Lead.
The AI Act does not require external assessors for internal conformity assessment, but it does require that the assessment be credible. Organisations should ensure that the assessor team has no direct involvement in the system's development to avoid self-review bias, that assessors have the technical competence to evaluate the system, and that assessors have regulatory competence understanding the Act's requirements.
The assessor competence framework covers four areas. Regulatory knowledge requires a working understanding of Articles 8 through 15, Article 17, Annex IV, and Annex VI, plus the interaction between the AI Act and related regulations such as GDPR and NIS2. Technical knowledge requires the ability to read and evaluate technical documentation for the AI system types being assessed, including understanding model architectures, training methodologies, evaluation metrics, fairness measures, and data governance practices at a level sufficient to verify documentation accuracy.
Audit methodology competence requires training in structured assessment methodology: evidence collection, sampling, verification, non-conformity classification, and reporting. Experience with ISO 19011 guidelines for auditing management systems provides a solid foundation.
Continuing professional development keeps regulatory and technical knowledge current as harmonised standards are developed, AI Office guidance is published, and enforcement practice emerges. A minimum of 20 hours per year of AI Act-relevant CPD is a reasonable expectation.
Certification requires coordinated effort across multiple functions, with clearly defined roles and responsibilities. The organisational model scales with the number of AI systems the organisation operates.
The AI Governance Lead holds ultimate accountability. This role reviews and approves the AISDP, signs the Declaration of Conformity, manages relationships with competent authorities, and has authority to compel remediation, halt deployment, and allocate resources. The AI System Assessor handles discovery, classification, risk assessment, and AISDP compilation for each system, combining regulatory and technical understanding.
The Technical SME provides engineering evidence including architecture documentation, model evaluation results, data governance artefacts, and testing reports. The Business Owner ensures intended purpose, deployment context, and human oversight measures are correctly documented and operational. The Conformity Assessment Coordinator manages the end-to-end certification workflow, coordinates the assessment plan, manages the Non-Conformity Register, prepares the Declaration of Conformity for signature, and manages EU database registration.
The Legal and Regulatory Advisor reviews evidence for legal sufficiency and advises on interpretation of novel requirements. The DPO Liaison confirms data governance documentation is consistent with GDPR obligations. The Internal Audit Assurance Lead provides independent verification that the certification process was followed correctly.
Small organisations with five to ten AI systems typically combine roles, with the Conformity Assessment Coordinator doubling as the Assessor and technical support drawn part-time from engineering. Medium organisations with ten to thirty systems require a dedicated AI Governance team with two to four assessors and a dedicated Coordinator. Large enterprises with thirty or more systems operate a full AI Compliance Office with multiple assessors organised by business domain, parallel certification tracks, and embedded legal and audit functions.
Article 43(1) requires third-party conformity assessment by a notified body for high-risk AI systems used for biometric identification under Annex III point 1. The designation of notified bodies under Article 28 is proceeding gradually, and as of early 2026 only a small number have been formally designated. Organisations should monitor the NANDO database for AI Act-designated bodies and engage early with prospective bodies.
Notified body assessment is more intensive than internal assessment. The body conducts its own independent evaluation of the QMS and technical documentation and may require access to source code, training infrastructure, and testing environments. The AISDP and evidence pack should be complete and internally consistent before engagement. Key personnel must be available for interviews during the assessment period.
The assessment generally proceeds in two phases. The desktop review examines QMS documentation and the AISDP, producing an interim findings report. The technical review may include on-site or remote inspection, personnel interviews, and live system demonstrations. Documentation expectations are materially more demanding: a notified body expects evidence extracted, contextualised, and presented as self-standing artefacts with narrative summaries, traceability matrices, and methodology descriptions.
Organisations should establish a formal interaction protocol covering single points of contact, encrypted communication channels, access scope, confidentiality arrangements, and dispute resolution. The Conformity Assessment Coordinator maintains a formal interaction log recording every substantive communication, which serves as evidence of cooperative engagement under Article 99(7).
The Non-Conformity Register is a compliance artefact in its own right, demonstrating the organisation's ability to identify, classify, and resolve gaps, which is itself a QMS requirement under Article 17. The Conformity Assessment Coordinator classifies non-conformities into three severity levels with distinct remediation expectations.
Critical non-conformities indicate a fundamental failure that could result in serious harm, a violation of fundamental rights, or a material misstatement in the Declaration of Conformity. The system cannot be placed on the market until resolved. Remediation must begin immediately and be verified by the assessor. Examples include a complete absence of human oversight capability, fabricated evidence, or a fundamental rights impact assessment that was never conducted.
Major non-conformities indicate a significant gap that weakens compliance posture but does not present immediate serious harm risk. The system may proceed to market with a documented remediation plan and a deadline of typically 30 to 90 days. Examples include fairness testing that omits a relevant protected characteristic, a PMM plan without alerting thresholds, or cybersecurity testing conducted more than eighteen months ago.
Minor non-conformities are documentation deficiencies or inconsistencies that do not affect substantive compliance, with remediation deadlines of up to six months. Examples include typographical errors, incorrect cross-references, or minor version discrepancies.
Each non-conformity follows a defined workflow: identification and logging by the assessor, assignment to the responsible person by the Coordinator, root cause analysis to address the underlying cause rather than the symptom, remediation action, documented evidence of remediation, verification by the assessor confirming effectiveness, and closure with date and verification evidence. Non-conformities that remain open beyond their deadline require escalation to the AI Governance Lead with documented justification and a revised timeline.
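The workflow above can be sketched as an ordered state sequence with an overdue check that triggers escalation to the AI Governance Lead. The state names and helper functions are assumptions for illustration, not prescribed by the Act.

```python
# Sketch of the non-conformity workflow as an ordered state sequence.
from datetime import date

WORKFLOW = ["identified", "assigned", "root_cause_analysed",
            "remediated", "evidence_documented", "verified", "closed"]

def advance(state: str) -> str:
    """Move a non-conformity to the next workflow state; 'closed' is terminal."""
    i = WORKFLOW.index(state)
    return WORKFLOW[min(i + 1, len(WORKFLOW) - 1)]

def needs_escalation(state: str, deadline: date, today: date) -> bool:
    """Open past its deadline: escalate with justification and revised timeline."""
    return state != "closed" and today > deadline

print(advance("remediated"))
print(needs_escalation("assigned", date(2026, 3, 1), date(2026, 4, 1)))  # True
```

Modelling the workflow explicitly makes it easy to report, for any register snapshot, which items are stuck before verification and which are overdue.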
Organisations with multiple high-risk AI systems face a coordination challenge. Each system requires its own AISDP and conformity assessment, but many underlying compliance elements are shared: the QMS, the data governance framework, the cybersecurity infrastructure, the training programme, and the organisational structure. Assessing each system in isolation leads to duplicated effort and inconsistent findings.
A shared evidence strategy allows artefacts that apply to multiple systems, such as QMS documentation, organisational policies, infrastructure security configurations, and training records, to be assessed once and referenced by each system's assessment. The evidence register for each system distinguishes between system-specific and shared evidence with clear version references.
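One way to enforce the shared/system-specific split with clear version references is to reject any shared reference that is not in the versioned shared catalogue. This is a minimal sketch; the artefact names and register structure are assumptions.

```python
# Illustrative shared-evidence catalogue: shared artefacts are assessed once,
# then referenced (with pinned versions) from each system's register.
shared_evidence = {
    "QMS-manual-v4.1": "assessed 2026-02, applies to all systems",
    "infra-security-config-v7": "assessed 2026-02, applies to all systems",
}

def system_register(system_id, specific, shared_refs):
    """Combine system-specific artefacts with version-pinned shared references."""
    unknown = [r for r in shared_refs if r not in shared_evidence]
    if unknown:
        # An unpinned or unknown shared reference breaks traceability.
        raise ValueError(f"unversioned shared reference: {unknown}")
    return {"system": system_id, "specific": specific, "shared": shared_refs}

reg = system_register("credit-scoring-v2",
                      ["model-eval-report-v3"], ["QMS-manual-v4.1"])
print(reg["shared"])  # ['QMS-manual-v4.1']
```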
A staggered assessment calendar distributes assessor workload across the year. A quarterly rolling schedule where a subset of systems is assessed each quarter ensures the organisation is continuously engaged in compliance verification and avoids the pressure of an annual compliance sprint.
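A round-robin assignment is one simple way to realise the quarterly rolling schedule; the function below is a sketch, not a prescribed scheduling method.

```python
# Minimal sketch: distribute systems across four quarters in rotation so each
# quarter carries roughly a quarter of the assessment workload.
def quarterly_schedule(systems):
    """Assign systems to Q1-Q4 round-robin."""
    schedule = {q: [] for q in ("Q1", "Q2", "Q3", "Q4")}
    for i, s in enumerate(systems):
        schedule[f"Q{i % 4 + 1}"].append(s)
    return schedule

plan = quarterly_schedule([f"system-{n}" for n in range(1, 11)])  # 10 systems
print({q: len(v) for q, v in plan.items()})
# {'Q1': 3, 'Q2': 3, 'Q3': 2, 'Q4': 2}
```

In practice the assignment would also weight system complexity and align with release calendars, but even the naive rotation avoids the annual compliance sprint.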
Cross-system findings analysis identifies systemic weaknesses. The AI Governance Lead reviews the aggregate Non-Conformity Register across all systems at least quarterly, looking for patterns that suggest organisational gaps, such as recurring training deficiencies, common documentation omissions, or persistent evidence currency issues, rather than isolated system-specific problems. These patterns trigger improvements to shared processes and infrastructure rather than repeated individual remediation.
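The pattern-versus-isolated-problem distinction can be operationalised by counting how many distinct systems share a finding category. The threshold and category labels below are illustrative assumptions.

```python
# Sketch of cross-system findings analysis: surface categories recurring in
# several systems, which suggests an organisational rather than
# system-specific gap.
from collections import defaultdict

def systemic_patterns(register, min_systems=3):
    """Return categories appearing in at least min_systems distinct systems."""
    seen = defaultdict(set)
    for entry in register:
        seen[entry["category"]].add(entry["system"])
    return {c: sorted(s) for c, s in seen.items() if len(s) >= min_systems}

register = [
    {"system": "A", "category": "training records stale"},
    {"system": "B", "category": "training records stale"},
    {"system": "C", "category": "training records stale"},
    {"system": "A", "category": "cross-reference errors"},
]
print(systemic_patterns(register))
# {'training records stale': ['A', 'B', 'C']}
```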
The initial conformity assessment establishes that the system meets requirements at a point in time. The ongoing obligations under Articles 9, 18, and 72 require a more sustained approach. Organisations should supplement formal assessment with a continuous assessment programme operating on three cadences.
Monthly automated checks verify technical compliance: monitoring systems are operational, all evidence artefacts are current, PMM metric thresholds have not been breached, and all non-conformities are within remediation deadlines. Engineering teams automate these checks using scheduled scripts that query monitoring infrastructure, evidence repositories, and the non-conformity register, producing structured reports.
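A monthly check runner along the lines described might look like the sketch below. The specific checks, dates, and thresholds are placeholders; real checks would query the monitoring stack, the evidence repository, and the non-conformity register.

```python
# Hedged sketch of a monthly compliance check runner: each check is a named
# callable returning pass/fail, and the runner produces a structured report.
from datetime import date

def evidence_current(artefacts, today):
    """Fail if any artefact's review-by date has passed."""
    return all(a["review_by"] >= today for a in artefacts)

def run_monthly_checks(checks):
    """Execute all checks and return a structured pass/fail report."""
    return {name: fn() for name, fn in checks.items()}

artefacts = [{"id": "pentest-report", "review_by": date(2026, 12, 1)}]
report = run_monthly_checks({
    # Placeholders: real checks would ping dashboards and query registers.
    "monitoring_operational": lambda: True,
    "evidence_current": lambda: evidence_current(artefacts, date(2026, 6, 1)),
})
print(report)
# {'monitoring_operational': True, 'evidence_current': True}
```

The structured report is what feeds the quarterly governance review, so check names should map directly to the obligations they verify.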
Quarterly governance reviews bring the AI Governance Lead, technical leads, and the DPO Liaison together to review monthly reports, assess overall compliance posture, review the risk register, and make governance decisions. These reviews provide the human judgement layer that automated checks cannot replace.
Annual formal reassessment repeats the full Annex VI assessment process, examining every AISDP module against the current system state. This is the formal compliance renewal that maintains the validity of the Declaration of Conformity.
Certain events should trigger unscheduled assessment regardless of the calendar: a substantial modification to the system, a serious incident, a regulatory enforcement action, new harmonised standards affecting the system's compliance posture, or a material change in deployment context such as new member states or new use cases. The ISO 42001 framework for AI management systems provides a structured foundation for continuous assessment that aligns well with the Act's requirements.
Internal self-assessment is permitted for most systems: Article 43(2) provides that for most Annex III high-risk systems, internal assessment under Annex VI is the default. The provider bears full responsibility for assessment rigour. Only biometric identification systems require mandatory third-party assessment.
Critical non-conformities block market placement until resolved. Major non-conformities allow continued operation with a 30-90 day remediation plan. Minor non-conformities have up to six months for remediation. All follow a workflow from identification through root cause analysis to verified closure.
Use a shared evidence strategy for common artefacts like QMS documentation, stagger assessments on a quarterly rolling calendar, and conduct cross-system findings analysis to identify systemic weaknesses rather than treating each system in isolation.
A five-phase assessment for a single system typically spans ten to twenty-three days, covering desktop review, evidence verification, live system verification, stakeholder interviews, and synthesis.
The assessment documentation set comprises the Assessment Plan, Assessment Checklist, Evidence Register, Non-Conformity Register, Assessment Report, and the Declaration of Conformity under Annex V.
Three severity levels: critical (blocks market placement), major (30-90 day remediation), and minor (up to 6 months). Each follows a workflow from identification through root cause analysis to verified closure.
Article 43(1) mandates third-party assessment for biometric identification systems under Annex III point 1. Other providers may voluntarily seek notified body involvement. The total assessment timeline typically spans four to eight months.
The third workstream verifies design, development, and post-market monitoring consistency. The assessor traces from the AISDP to source artefacts. If the AISDP states the model was trained on Dataset v3.2 using specific hyperparameters, the assessor must verify this against the model registry and training pipeline logs. If the AISDP states monthly fairness metric computation, the assessor verifies the monitoring system is producing and reviewing these reports.
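The tracing described above amounts to a field-by-field comparison between documented claims and source-of-truth records. The sketch below illustrates this under assumptions: the field names, the Dataset v3.2 example, and the helper are hypothetical.

```python
# Illustrative consistency check: compare AISDP-stated training facts against
# the model registry record and report any disagreements.
def trace_claim(aisdp_claim: dict, registry_record: dict) -> list:
    """Return the fields where documentation and registry disagree."""
    return [k for k, v in aisdp_claim.items() if registry_record.get(k) != v]

claim = {"dataset": "Dataset v3.2", "learning_rate": 0.001}
record = {"dataset": "Dataset v3.2", "learning_rate": 0.01}
print(trace_claim(claim, record))  # ['learning_rate']
```

Any non-empty result becomes a Non-Conformity Register entry: either the documentation or the pipeline record is wrong, and root cause analysis determines which.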
The Declaration of Conformity under Annex V is the legally binding statement that the system conforms to the Act's requirements. Signed by the AI Governance Lead or authorised representative, it contains the information specified in Article 47 and Annex V and must be retained for ten years after the system is placed on the market. A pre-assessment readiness review, conducted by the Conformity Assessment Coordinator before committing to formal assessment, verifies that documentation, evidence, testing, and monitoring are mature enough to undergo assessment.
For organisations with a dedicated internal audit function, the Internal Audit Assurance Lead can provide an additional layer of independent verification. For smaller organisations, external consultants or peer review arrangements with other organisations can supplement internal assessor independence.
Timeline planning should account for four to eight weeks of pre-engagement, four to twelve weeks of desktop review, two to eight weeks of gap remediation, two to four weeks of technical assessment, and two to four weeks of final reporting. For mandatory assessments, the total typically spans four to eight months. Notified body certification is not permanent; Article 44 provides for periodic reassessment, and substantial modifications may trigger supplementary assessment.
For AI systems that are safety components of Annex I products, the conformity assessment landscape involves coordinating between product notified bodies and AI Act notified bodies. Three models are emerging: single-body (one body designated under both frameworks), sequential (product assessment first, then AI), and parallel (concurrent assessments with coordination protocol). Regulator interaction covers the broader engagement strategy with competent authorities.
For organisations using manual processes, assessment documentation can be maintained with templates covering scope, methodology, schedule, and qualifications. Assessment checklists map Article-by-Article requirements to evidence sources. Spreadsheet-based evidence registers and non-conformity registers are adequate for organisations with a small number of AI systems, though they scale poorly. A quality management system fundamentally requires documented procedures, not a specific software platform; ISO 42001 certification can be achieved with paper-based or spreadsheet-based documentation where necessary.