AI systems that are safety components of Annex I products must satisfy both AI Act and product legislation requirements. The AISDP and product technical file are maintained as separate, cross-referenced documents with a traceability matrix. Risk assessment must account for the physical safety context, and CE marking covers both regimes under Article 48. An extended deadline of August 2027 applies to Chapter 2 requirements.
Article 43(1) requires third-party conformity assessment by a notified body for high-risk AI systems used for biometric identification under Annex III point 1, unless those systems are intended for use in law enforcement. Providers of other high-risk systems may voluntarily seek notified body involvement to strengthen the credibility of their assessment or to satisfy deployer contractual requirements.
The designation of notified bodies under the AI Act is proceeding gradually. Article 28 requires member states to designate notifying authorities, and Article 29 sets out the criteria for designation including independence, technical competence, impartiality, and adequate resources. As of early 2026, only a small number of bodies have been formally designated, and several member states have not yet established their notifying authority. Organisations should monitor the NANDO database for AI Act-designated bodies and engage early with prospective bodies to understand their assessment methodology, timeline, and fee structure.
Notified body assessment is more intensive than internal assessment. The body conducts its own independent evaluation of the QMS and technical documentation and may require access to the system's source code, training infrastructure, and testing environments. The AISDP and evidence pack should be complete and internally consistent before engagement. Technical environments including development, staging, and production should be accessible for demonstration and inspection. Key personnel should be available for interviews during the assessment period. The document management system should provide controlled access to relevant artefacts without exposing unrelated intellectual property.
The assessment generally proceeds in two phases. The desktop review examines QMS documentation and the AISDP against Articles 8 through 15, producing an interim findings report. The technical review may include on-site or remote inspection, interviews with key personnel, and live demonstrations.
Documentation expectations are materially more demanding than for internal assessment. Where internal assessment might accept a cross-reference to a test result in MLflow, a notified body expects the result extracted, contextualised, and presented as a self-standing evidence artefact. The AI System Assessor should prepare a notified body evidence pack supplementing the standard register with narrative summaries, a traceability matrix mapping every AISDP claim to supporting evidence, test result summaries including methodology and statistical significance, and an organisation-specific glossary.
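The traceability matrix at the heart of the evidence pack is, in practice, tabular data that can be generated and checked programmatically. The sketch below is illustrative only: the row fields, status vocabulary, and CSV layout are assumptions, not anything prescribed by the AI Act or by any notified body.

```python
import csv
from dataclasses import dataclass

@dataclass
class TraceRow:
    aisdp_section: str   # where in the AISDP the claim is made, e.g. "4.2 Robustness"
    claim: str           # the compliance claim itself
    article: str         # AI Act article the claim addresses
    evidence_ref: str    # ID of the self-standing evidence artefact
    status: str          # "complete", "draft", or "missing"

def unsupported(rows):
    """Rows without a complete evidence artefact are gaps to close
    before the evidence pack goes to the notified body."""
    return [r for r in rows if r.status != "complete"]

def export_matrix(rows, path):
    """Write the matrix as CSV for the assessor's review."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["AISDP section", "Claim", "Article", "Evidence", "Status"])
        for r in rows:
            w.writerow([r.aisdp_section, r.claim, r.article,
                        r.evidence_ref, r.status])
```

Running `unsupported` before each submission gives the AI System Assessor a mechanical check that every AISDP claim maps to a finished artefact.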
Notified body certification is not permanent. Article 44 provides for periodic reassessment, and unannounced audits may occur. Substantial modifications may trigger supplementary assessment. Organisations must maintain the same documentation discipline after certification as during the initial assessment.
A formal interaction protocol should be established before assessment begins, covering single points of contact on each side, encrypted communication channels for document exchange, the scope of access required, confidentiality arrangements for proprietary model architectures, and the dispute resolution procedure.
The Conformity Assessment Coordinator maintains a formal interaction log recording every substantive communication: meeting minutes, document submissions, questions raised, responses provided, and interim findings received. This log serves as evidence of cooperative engagement, which is a mitigating factor under Article 99(7) if compliance issues arise later.
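An interaction log of this kind is straightforward to model as an append-only record. The entry kinds and the `ref` linking mechanism below are illustrative assumptions; any structure that captures who said what, when, and whether a request is still open would serve the same evidential purpose.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LogEntry:
    timestamp: datetime
    kind: str                 # "meeting", "submission", "rfi", "response", "finding"
    summary: str
    ref: str = ""             # links a "response" back to the "rfi" it answers
    artefacts: list = field(default_factory=list)

class InteractionLog:
    """Append-only record of every substantive exchange with the body."""
    def __init__(self):
        self.entries = []

    def record(self, entry: LogEntry):
        self.entries.append(entry)

    def open_requests(self):
        """RFIs with no matching response yet: candidates for follow-up."""
        answered = {e.ref for e in self.entries if e.kind == "response"}
        return [e for e in self.entries
                if e.kind == "rfi" and e.ref not in answered]
```

The `open_requests` view doubles as the Coordinator's working queue and, over time, as documentary evidence that requests were answered promptly.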
During assessment, the body may issue Requests for Information or Requests for Evidence. The Conformity Assessment Coordinator establishes an internal SLA of five business days for routine queries and two days for urgent queries. Delayed responses extend the timeline and may signal inadequate coordination.
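Because the SLA is expressed in business days, due dates need weekend-aware arithmetic. A minimal sketch, assuming a Monday-to-Friday working week and ignoring public holidays:

```python
from datetime import date, timedelta

# Internal SLA from the text: 5 business days routine, 2 urgent
SLA_DAYS = {"routine": 5, "urgent": 2}

def due_date(received: date, priority: str) -> date:
    """Count forward the SLA number of business days, skipping weekends."""
    remaining = SLA_DAYS[priority]
    d = received
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return d

# An urgent query received on Monday 2 March 2026 is due Wednesday 4 March;
# a routine one is due Monday 9 March, since the weekend does not count.
```

A production version would also subtract national holidays, which this sketch omits.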
Fees vary by body, system complexity, and scope. Common models include fixed fees for defined scope, time-and-materials, and hybrid models with a fixed base for desktop review and time-and-materials for technical assessment. The AI Governance Lead budgets for assessment as a distinct line item separate from internal assessment costs.
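The three fee models differ only in which cost components are fixed and which scale with assessor effort. A small comparison sketch, where all rates and hours are purely illustrative:

```python
def fixed_fee(quoted_total: float) -> float:
    """Fixed fee for a defined scope: the price is known up front."""
    return quoted_total

def time_and_materials(hourly_rate: float, hours: float) -> float:
    """Cost scales with assessor effort, so it is uncertain up front."""
    return hourly_rate * hours

def hybrid(desktop_base: float, hourly_rate: float,
           technical_hours: float) -> float:
    """Fixed base for the desktop review plus time-and-materials
    for the technical assessment."""
    return desktop_base + hourly_rate * technical_hours
```

Budgeting against the time-and-materials and hybrid models therefore requires an estimate of technical assessment hours, which the prospective body should be asked to provide.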
Timeline planning accounts for the full lifecycle. Pre-engagement including body selection, scope agreement, and contract negotiation requires four to eight weeks. The desktop review requires four to twelve weeks. Gap remediation adds two to eight weeks. The technical assessment requires two to four weeks of active engagement. Final reporting and certification adds two to four weeks. For mandatory assessments, the total typically spans four to eight months. Organisations should begin engagement at least nine months before the target deployment date. Multiple rounds of review and remediation are common.
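The phase durations above can be turned into a simple planning calculation. This sketch sums the stated ranges and back-calculates the latest engagement start from a target deployment date; the worst-case total of 36 weeks is consistent with the guidance to begin at least nine months ahead.

```python
from datetime import date, timedelta

# Phase durations in weeks (low, high), taken from the lifecycle above
PHASES = {
    "pre-engagement": (4, 8),
    "desktop review": (4, 12),
    "gap remediation": (2, 8),
    "technical assessment": (2, 4),
    "reporting and certification": (2, 4),
}

def total_weeks():
    """Best-case and worst-case end-to-end duration."""
    low = sum(lo for lo, hi in PHASES.values())
    high = sum(hi for lo, hi in PHASES.values())
    return low, high

def latest_start(target_deployment: date) -> date:
    """Back-calculate the latest engagement start from the worst case."""
    _, high = total_weeks()
    return target_deployment - timedelta(weeks=high)
```

Because multiple remediation rounds are common, treating the worst-case figure as the planning baseline is the conservative choice.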
For AI systems that are safety components of products covered by Annex I harmonisation legislation, the conformity assessment landscape is more complex. Article 43(3) provides that the conformity assessment for the AI system may be carried out as part of the product assessment. This creates a coordination challenge: the product notified body may not have AI expertise, and an AI Act notified body may not have product-specific expertise.
Three coordination models are emerging. The single-body model uses one body designated under both frameworks for an integrated assessment, but requires rare dual competence. The sequential model has the product body assess first, with the AI Act body accepting the product assessment as input, preserving expertise but extending the timeline. The parallel model runs both assessments concurrently with a coordination protocol ensuring shared findings.
The principal Annex I instruments interacting with the AI Act include the Machinery Regulation covering AI in robots and industrial equipment, the Medical Devices Regulation covering clinical AI, the Radio Equipment Directive covering IoT, and the Lifts Directive and Toy Safety Directive. Each has its own documentation requirements, assessment procedures, and notified body ecosystem. The AI Act supplements rather than replaces these requirements.
The organisation should maintain the AISDP and the product technical file as separate, cross-referenced documents with bidirectional references. The AISDP references product-level risk assessments and safety documentation; the product file references AI-specific risk assessment, model documentation, and testing results. A traceability matrix maps overlapping requirements to prevent gaps. The CE marking on the product covers both product safety and AI Act requirements under a single marking.
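Bidirectional cross-referencing is easy to break as both documents evolve, so an automated integrity check is worth running at each revision. A minimal sketch, assuming each document exposes a set of section IDs and a list of outbound references to the other (the ID scheme shown is a hypothetical one):

```python
def broken_references(aisdp_out_refs, product_out_refs,
                      aisdp_ids, product_ids):
    """Bidirectional integrity: every outbound reference in one document
    must resolve to a section ID that exists in the other document.
    Returns the dangling references from each side."""
    dangling_from_aisdp = sorted(set(aisdp_out_refs) - set(product_ids))
    dangling_from_product = sorted(set(product_out_refs) - set(aisdp_ids))
    return dangling_from_aisdp, dangling_from_product
```

Any dangling reference reported here is a candidate gap for the traceability matrix to flag before either document is submitted.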
A second CE marking is not required: the CE marking on the product covers both product safety and AI Act requirements. The Declaration of Conformity must reference both the AI Act and the applicable product legislation.
The extended deadline does not pause all obligations: AI literacy (Article 4) and prohibited practices (Article 5) already apply, and GPAI obligations apply from August 2025. Only Chapter 2 requirements and conformity assessment are deferred to 2027.
The choice between the three coordination models depends on available body competences. The single-body model is simplest but requires dual competence, which is currently rare. The sequential model preserves specialist expertise. The parallel model is fastest but requires a coordination protocol.
The extended deadline defers Chapter 2 requirements (Articles 8-15), conformity assessment, registration, and CE marking. AI literacy, prohibited practices, and GPAI obligations already apply.
Risk assessment for these systems must account for the physical safety context, supplement the product FMEA, and add a fundamental rights assessment, which product safety standards do not cover.
Annex I product AI systems benefit from an extended deadline of 2 August 2027. This additional year reflects the complexity of integrating AI Act compliance with existing product conformity frameworks. The extended deadline does not defer Article 4 AI literacy, Article 5 prohibited practices, or GPAI obligations, which apply from their original dates regardless.