The EU AI Act requires credible conformity assessment for high-risk AI systems. Assessor independence and a defined competence framework ensure that internal assessments meet the standard a competent authority or notified body would expect.
The EU AI Act does not require external assessors for internal conformity assessment, but it does require that the assessment be credible. Credibility depends on the assessor team having no direct involvement in the system's development. This separation avoids self-review bias, where the people who built the system are also the people judging whether it meets regulatory requirements.
Assessors must bring two distinct types of competence to their role. They need technical competence to evaluate the system, covering model architecture, the data pipeline, fairness metrics, and cybersecurity measures. They also need regulatory competence: an understanding of the AI Act's requirements and how those requirements translate into technical evidence. Conformity Assessment for High-Risk AI provides the broader context for how internal assessment fits within the EU AI Act's compliance framework.
Organisations with a dedicated internal audit function can use the Internal Audit Assurance Lead role to provide an additional layer of independent verification over the conformity assessment process. This role sits outside the development and assessment teams, offering a check on both the evidence and the process used to evaluate it.
Smaller organisations that lack a dedicated audit function have alternative options. External consultants can be engaged to supplement internal assessor independence, bringing both fresh perspective and specialist knowledge. Peer review arrangements with other organisations offer another approach, allowing assessors from different companies to review each other's work. These compensating controls help organisations of any size achieve the credibility threshold the regulation demands. Compliance for SMEs covers additional strategies for resource-constrained organisations.
The credibility of an internal conformity assessment depends directly on the assessor's ability to interrogate evidence with the depth that a competent authority or notified body would expect. Organisations should define a competence framework specifying the knowledge, skills, and experience required for assessment roles. This framework ensures consistent assessment quality regardless of which individual conducts the evaluation.
The competence framework spans four domains: regulatory knowledge, technical knowledge, audit methodology, and continuing professional development. Each domain addresses a different dimension of the assessor's ability to evaluate whether an AI system genuinely meets the EU AI Act's requirements or merely appears to do so.
Assessors must have a working understanding of the AI Act's requirements for high-risk systems, covering Articles 8 through 15, Article 17, Annex IV, and Annex VI. They must understand the conformity assessment procedures and the Declaration of Conformity requirements. This regulatory foundation enables assessors to map each requirement to the specific evidence that satisfies it.
Beyond the AI Act itself, assessors should understand the interaction between the AI Act and related regulations. GDPR, NIS2, and sector-specific legislation each impose additional requirements that may intersect with AI Act obligations. Assessors need sufficient understanding of these cross-regulatory relationships to identify gaps where compliance with one framework does not automatically satisfy another.
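The requirement-to-evidence mapping described above can be made concrete in tooling. The following is a minimal sketch only: the Article references are real AI Act provisions, but the evidence artefact names, the map itself, and the `missing_evidence` function are illustrative assumptions, not terms or structures drawn from the regulation.

```python
# Hypothetical sketch of a requirement-to-evidence map. Article references
# are real AI Act provisions; the artefact names are illustrative examples,
# not terminology from the regulation itself.
REQUIREMENT_EVIDENCE_MAP = {
    "Article 9 (risk management)": ["risk register", "risk treatment log"],
    "Article 10 (data governance)": ["dataset datasheets", "bias analysis report"],
    "Article 15 (accuracy and robustness)": ["evaluation metrics report", "adversarial test results"],
}

def missing_evidence(provided: set[str]) -> dict[str, list[str]]:
    """Return, per requirement, any expected artefacts not yet supplied."""
    gaps: dict[str, list[str]] = {}
    for requirement, expected in REQUIREMENT_EVIDENCE_MAP.items():
        absent = [artefact for artefact in expected if artefact not in provided]
        if absent:
            gaps[requirement] = absent
    return gaps
```

A map of this kind also surfaces the cross-regulatory gaps mentioned above: an artefact that satisfies a GDPR obligation can be recorded separately from the AI Act evidence it does not automatically cover.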
Assessors must be able to read and evaluate technical documentation for the types of AI system they assess. For ML-based systems, this includes understanding model architectures, training methodologies, evaluation metrics, fairness measures, and data governance practices. The depth required is sufficient to verify that the documentation accurately describes the system.
Assessors do not need the ability to build the system themselves. What they must be able to do is identify when technical claims are unsupported, inconsistent, or implausible. This distinction is important: the assessor's role is verification, not engineering. They examine whether the evidence presented genuinely demonstrates compliance, not whether the system could have been built differently. Technical Documentation Requirements details the documentation that assessors evaluate.
Assessors should complete training in structured assessment methodology covering evidence collection, sampling, verification, non-conformity classification, and reporting. Experience with ISO 19011, which provides guidelines for auditing management systems, or equivalent frameworks offers a solid foundation for this work.
The AI Act's regulatory landscape is evolving. Harmonised standards are being developed, guidance from the AI Office and national competent authorities is being published, and enforcement practice is emerging. Assessors should participate in continuing professional development that keeps their regulatory and technical knowledge current. A minimum of 20 hours per year of AI Act-relevant CPD is a reasonable baseline expectation for maintaining assessment quality.
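The 20-hour baseline lends itself to a simple annual tally. This sketch is purely illustrative: the function and its return shape are assumptions for demonstration, and the threshold is the baseline suggested above, not a figure prescribed by the Act.

```python
def annual_cpd_status(hours_logged: list[float], minimum: float = 20.0) -> tuple[float, bool]:
    """Sum an assessor's logged CPD hours for the year and compare the total
    against the annual baseline. Returns (total_hours, baseline_met)."""
    total = sum(hours_logged)
    return total, total >= minimum
```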
Certification under the EU AI Act requires coordinated effort across multiple organisational functions, each contributing distinct expertise.
Certification under the EU AI Act requires coordinated effort across multiple organisational functions, each contributing distinct expertise. The AI System Assessor handles discovery, classification, risk assessment, and AISDP compilation for each system, examining it against the Article 3(1) definition, classifying it within the Act's risk tiers, and performing gap assessments for brownfield systems. This role must combine regulatory and technical understanding.
The Technical SME provides engineering evidence: architecture documentation, model evaluation results, data governance artefacts, and testing reports. The Technical Owner, typically the engineering lead or CTO, ensures design, implementation, and testing satisfy Articles 9 through 15. The Business Owner ensures intended purpose, deployment context, and human oversight measures are correctly documented and operational.
The AI Governance Lead holds ultimate accountability, reviewing and approving the AISDP, signing the Declaration of Conformity, and managing relationships with competent authorities. This role carries authority to compel remediation, halt deployment, and allocate resources. The Conformity Assessment Coordinator manages the end-to-end certification workflow, including assessment planning, the Non-Conformity Register, Declaration of Conformity preparation, and EU database registration.
Supporting roles include the Legal and Regulatory Advisor, who reviews evidence for legal sufficiency and advises on interpretation of novel requirements; the DPO Liaison, who confirms data governance documentation is consistent with GDPR obligations and verifies DPIAs are complete; and the Internal Audit Assurance Lead, who provides independent verification that the certification process was followed correctly, evidence is complete and authentic, and no material deficiencies were overlooked.
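The Non-Conformity Register the coordinator maintains could be modelled as a simple record type. This is a minimal sketch under stated assumptions: the field names, the two-level major/minor severity scale, and the `open_majors` helper are all hypothetical illustrations, not a structure defined by the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class NonConformity:
    """One entry in a hypothetical Non-Conformity Register."""
    system_id: str
    requirement: str   # e.g. "Article 10(3)" — illustrative reference
    severity: str      # "major" or "minor" — an assumed two-level scale
    raised_on: date
    remediated: bool = False

def open_majors(register: list[NonConformity]) -> list[NonConformity]:
    """Major findings still awaiting remediation — what a coordinator
    would typically escalate to the AI Governance Lead first."""
    return [nc for nc in register if nc.severity == "major" and not nc.remediated]
```

A register structured this way gives the Internal Audit Assurance Lead a concrete artefact to verify: every entry either shows remediation or remains visibly open.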
External assessors are not required: the AI Act does not mandate them for internal conformity assessment. However, smaller organisations may use external consultants or peer review arrangements to supplement internal assessor independence.
Assessors must be able to read and evaluate technical documentation, identify unsupported or implausible claims, and verify that documentation accurately describes the system. They do not need to be able to build the system themselves.
A minimum of 20 hours per year of AI Act-relevant CPD is a reasonable expectation, covering evolving harmonised standards, guidance from the AI Office, and emerging enforcement practice.
Assessors need a competence framework spanning regulatory knowledge, technical knowledge, audit methodology, and continuing professional development.
Assessors must understand Articles 8-15, Article 17, Annex IV, Annex VI, conformity assessment procedures, and cross-regulatory interactions with GDPR and NIS2.
Assessors must evaluate model architectures, training methodologies, evaluation metrics, fairness measures, and data governance practices to verify documentation accuracy.
Nine roles coordinate: AI System Assessor, Technical SME, Technical Owner, Business Owner, AI Governance Lead, Conformity Assessment Coordinator, Legal Advisor, DPO Liaison, and Internal Audit Assurance Lead.