Articles 47 and 48 establish the formal outputs of conformity assessment: the Declaration of Conformity and the CE marking. Together with the harmonised standards framework under Article 40, these create enforceable obligations with personal and organisational liability consequences for providers of high-risk AI systems.
The conformity assessment procedure produces several outputs with distinct legal significance: the CE marking, the Declaration of Conformity, and associated liability consequences. These outputs determine what happens after assessment is complete. What follows addresses the harmonised standards against which conformity is presumed, the marking that signals compliance to the market, the pathways available when conformity cannot be demonstrated, and the legal exposure that attaches to the provider's formal declaration.
Understanding these outputs is essential because each carries enforceable obligations. The CE marking is not a quality label; it is a regulatory requirement under Article 48. The Declaration of Conformity is not a statement of intent; it is a legal instrument under Article 47 whose signatory assumes personal and organisational liability. Assessment failure does not end the compliance journey; it triggers defined pathways with specific business consequences. Together, these elements form the compliance endpoint that every AISDP preparation process must reach.
Harmonised standards published in the Official Journal create a presumption of conformity under Article 40, shifting the burden of proof from the provider to the authority. As of early 2026, the CEN/CENELEC JTC 21 standards development process remains ongoing. The first tranche of harmonised standards is anticipated but not yet finalised.
In the interim, reference to international standards provides a credible compliance framework. The most relevant standards include ISO/IEC 42001:2023 on AI management systems, ISO/IEC 23894:2023 on AI risk management, ISO/IEC 25012:2008 on data quality, ISO/IEC 25010:2011 on system and software quality, ISO/IEC 27001:2022 on information security management, and ISO/IEC 38507:2022 on governance implications of AI.
Organisations should monitor the JTC 21 working groups closely. Once harmonised standards are published, compliance with them significantly strengthens the internal conformity assessment. The presumption of conformity means the authority must demonstrate non-compliance rather than the provider having to prove compliance. This is a material shift in the enforcement dynamic.
The transition from international standards to harmonised standards will require assessment updates. Organisations should plan for a re-mapping exercise once harmonised standards are published, tracing existing compliance evidence to the new standard's requirements and identifying any gaps the harmonised standard introduces beyond the international standard baseline.
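A re-mapping exercise of this kind can be tooled. The sketch below assumes evidence artefacts are already tagged with the interim international-standard clauses they satisfy; the harmonised-standard requirement identifiers (hEN-...) and file names are hypothetical placeholders, since JTC 21 clause numbering is not yet published.

```python
# Hypothetical re-mapping sketch: trace existing evidence, tagged with the
# interim international-standard clauses it satisfies, onto a harmonised
# standard's requirements and report the gaps. All identifiers are
# illustrative; JTC 21 clause numbering is not yet available.

# Harmonised-standard requirement -> interim clauses expected to satisfy it.
HARMONISED_TO_INTERIM = {
    "hEN-RM-4.1": ["ISO/IEC 23894:2023 6.1"],  # risk identification
    "hEN-DG-7.2": ["ISO/IEC 25012:2008 5.4"],  # data quality metrics
    "hEN-HO-9.3": [],                          # no interim equivalent: new gap
}

# Evidence index: interim clause -> artefacts held as compliance evidence.
EVIDENCE = {
    "ISO/IEC 23894:2023 6.1": ["risk-register-v3.xlsx"],
    "ISO/IEC 25012:2008 5.4": [],              # mapped, but no evidence yet
}

def remap_gaps(harmonised_map, evidence):
    """Return harmonised requirements that lack usable evidence."""
    gaps = {}
    for requirement, interim_clauses in harmonised_map.items():
        artefacts = [a for c in interim_clauses for a in evidence.get(c, [])]
        if not artefacts:
            gaps[requirement] = ("no interim mapping" if not interim_clauses
                                 else "no evidence")
    return gaps

print(remap_gaps(HARMONISED_TO_INTERIM, EVIDENCE))
# {'hEN-DG-7.2': 'no evidence', 'hEN-HO-9.3': 'no interim mapping'}
```

Requirements flagged as having no interim mapping are the gaps the harmonised standard introduces beyond the international-standard baseline; requirements flagged as lacking evidence indicate where the existing pack must be extended.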
Article 48 requires high-risk AI systems to bear the CE marking after the Declaration of Conformity is signed. For digital systems, the marking is displayed in the user interface and accompanying documentation. The CE marking must be visible, legible, and indelible. It is affixed by the Conformity Assessment Coordinator before the system is placed on the market or put into service.
The CE marking signals to deployers, users, and market surveillance authorities that the system has undergone conformity assessment and that the provider has declared compliance with the applicable requirements. It is not a quality certification or an endorsement; it is a regulatory statement that the system has been assessed against the Act's requirements. For deployers, the presence of the CE marking is a prerequisite for lawful use of the system.
Affixing the CE marking to a non-conforming system is an offence under Article 49(5), carrying penalties under Article 99. This makes the marking decision consequential. The Conformity Assessment Coordinator must confirm that the assessment is complete, the Declaration is signed, and the system genuinely conforms before affixing the marking. The marking must be applied to the system itself (in the user interface for digital systems), to the packaging where applicable, and to the accompanying documentation.
For purely digital AI systems delivered as software or SaaS, the CE marking takes a different form than for physical products. The marking should appear in the system's user interface, typically on an about or compliance information screen, and in the technical documentation provided to deployers. The system's registration in the EU database should reference the CE marking. Where the system is embedded in a physical product that already bears CE marking under other Union harmonisation legislation, the AI Act CE marking may be combined with the existing marking provided each regulation's requirements are separately satisfied.
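As an illustration of how a purely digital system might surface this, the sketch below shows one possible shape for a compliance-information payload behind an about screen or API endpoint. The Act prescribes no schema for this; every field name and value here is a hypothetical placeholder.

```python
# Illustrative only: one shape a SaaS provider's "about / compliance" screen
# or API endpoint might return. The AI Act prescribes no schema; every field
# name and value here is a hypothetical placeholder.
import json

COMPLIANCE_INFO = {
    "ce_marking": True,
    "regulation": "Regulation (EU) 2024/1689 (AI Act)",
    "declaration_of_conformity_ref": "DoC-2026-001",  # internal reference
    "eu_database_registration": "EUDB-XXXXXX",        # placeholder identifier
    "system_version": "2.4.1",
    "provider": "Example Provider GmbH",
}

print(json.dumps(COMPLIANCE_INFO, indent=2))
```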
Assessment failure is not a theoretical edge case; it is a likely outcome for organisations undertaking AISDP preparation for the first time, particularly for systems developed before the AI Act's requirements were understood. The assessment framework must include a defined pathway for this outcome. Three pathways are available when non-conformities are identified.
Remediation and re-assessment is the most common pathway. Non-conformities are remediated, the affected AISDP modules updated, the remediated areas re-assessed, and the assessment proceeds to completion. This pathway is appropriate when non-conformities are bounded and remediation is technically feasible within a reasonable timeframe. The AI System Assessor scopes the re-assessment to the remediated areas; a full re-assessment is not necessary unless the remediation affected the system's architecture or intended purpose.
Deployment deferral applies when non-conformities are fundamental. If the system's architecture does not support the required human oversight, the training data cannot be shown to be representative, or the model's explainability is insufficient for the deployment context, remediation may require rearchitecture or redevelopment. The system cannot be deployed until the fundamental issues are resolved. The AI Governance Lead communicates the deferral to the Business Owner, including the revised deployment timeline and the resource requirements for remediation. Deployment deferral is a significant business decision, but deploying a non-conforming system carries greater risk.
System withdrawal is the final option when non-conformities are irremediable within the system's economic or technical constraints. The system may need to be withdrawn from the AISDP preparation process entirely. This may lead to decommissioning, replacement with an alternative that can achieve conformity, or reclassification if the assessment reveals the system's actual risk profile differs from the initial classification (a CDR review may be appropriate in this case). The AI Governance Lead documents the withdrawal decision with the rationale, the non-conformities that triggered it, and the alternatives considered. Withdrawal is a significant business decision that should be communicated to all stakeholders, including the Business Owner, the deployment team, and any deployers who were expecting the system.
For systems subject to third-party conformity assessment, including biometric identification systems under Annex III point 1 or where the provider voluntarily engages a notified body, the notified body may decline to certify the system. A notified body rejection carries greater consequence than an internal assessment failure for three reasons.
First, the rejection is documented in the notified body's records. Second, the rejection may be communicated to the competent authority depending on the notified body's reporting obligations. Third, the provider cannot self-certify as an alternative for systems where third-party assessment is mandatory under Article 43(1).
When a notified body identifies non-conformities, it will typically provide a detailed report specifying the deficiencies. The provider should treat this report as a remediation plan, address each deficiency systematically, and resubmit for assessment. Multiple rounds of review and remediation are common in conformity assessment practice. Providers should budget for at least two to three assessment cycles when planning for notified body engagement.
The distinction between internal and notified body failure is important for resource planning. Internal assessment failures can be managed iteratively within the organisation's own timeline. Notified body rejections introduce external dependencies, including the notified body's scheduling constraints, its own quality assurance processes, and potential reputational effects in the market.
Organisations should also recognise that notified body capacity may be constrained in the early years of AI Act enforcement. The number of designated notified bodies for AI Act assessment is limited, and demand from providers of biometric identification systems and those voluntarily seeking third-party assessment may exceed supply. Early engagement with potential notified bodies, including preliminary discussions about assessment scope and timeline, is a prudent risk mitigation measure.
The Declaration of Conformity under Article 47 and Annex V is a legal instrument with eight mandatory content points. Each point must be populated with accurate, traceable information drawn from the AISDP and the conformity assessment record. By signing it, the provider declares that the high-risk AI system fulfils the requirements of the AI Act.
Point 1 requires the AI system name, type, and any additional unambiguous reference allowing identification and traceability. Source this from AISDP Module 1 (System Description): system name, version identifier, and unique system reference. Cross-check against the EU database registration and the model registry for version alignment.
Point 2 requires the name and address of the provider or, where applicable, their authorised representative. For third-country providers, confirm the authorised representative appointment under Article 22 and that the written mandate covers the AI Act scope.
Point 3 is a standard legal statement that the Declaration is issued under the sole responsibility of the provider. Use the Annex V wording verbatim. Where a single combined Declaration covers multiple regulations under Article 47(3), the sole-responsibility statement must cover each applicable regulation.
Point 4 states that the AI system is in conformity with the Regulation and, if applicable, any other relevant Union law requiring a Declaration of Conformity. The conformity assessment report must conclude with an unqualified conformity finding. For combined Declarations, verify each regulation's requirements have been independently assessed.
Point 5 addresses personal data processing. Where personal data is involved, include statements of compliance with the GDPR, the EUDPR (Regulation 2018/1725), and the Law Enforcement Directive (Directive 2016/680). Omit this statement only where the system processes no personal data, and document the basis for that determination.

Point 6 references any relevant harmonised standards used or common specifications against which conformity is declared. List each standard by full reference number and title. Where only part of a standard was applied, identify the specific clauses. Where no harmonised standards are available, reference the international standards or common specifications used as the compliance baseline.

Point 7 applies where a notified body was involved. Include the notified body's name, its four-digit NANDO identification number, a description referencing the Annex VII procedure, and the certificate number and date. For systems assessed internally under Annex VI, state that no notified body was involved.

Point 8 records the place and date of issue, the name and function of the signatory, an indication of the person or entity on whose behalf the signature is made, and the signature itself. The signatory must have corporate authority to bind the provider. The machine-readable format requirement under Article 47(1) must be satisfied for the electronic version.

The Conformity Assessment Coordinator assembles the Declaration by populating each field from these evidence sources. The Legal and Regulatory Advisor reviews for legal accuracy before the AI Governance Lead signs. A pre-signature checklist covering each of the eight fields should be completed and retained as part of the conformity assessment record.

Where the high-risk AI system is also subject to other Union harmonisation legislation requiring a Declaration of Conformity, Article 47(3) allows the provider to draw up a single combined Declaration. The combined Declaration must contain all information required to identify each applicable regulation and must satisfy the content requirements of each. The Conformity Assessment Coordinator should maintain a mapping from each regulation's Declaration requirements to the combined Declaration's content, ensuring that no requirement is omitted in the consolidation. This is particularly relevant for AI systems embedded in products already subject to the Machinery Regulation, the Medical Devices Regulation, or other sector-specific legislation.
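Because Article 47(1) requires the Declaration in machine-readable form, the eight points lend themselves to a structured representation. The sketch below is one possible shape, not a prescribed schema; the field names and the validation rule are illustrative assumptions.

```python
# One possible machine-readable shape for the Annex V Declaration (Python
# 3.10+). This is an illustrative schema, not an official format: Article
# 47(1) requires a machine-readable Declaration but prescribes no field names.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DeclarationOfConformity:
    # Point 1: system identification and traceability
    system_name: str
    system_type: str
    unique_reference: str                 # e.g. model-registry version id
    # Point 2: provider and, where applicable, authorised representative
    provider_name: str
    provider_address: str
    authorised_representative: str | None
    # Point 3: sole-responsibility statement (Annex V wording, verbatim)
    sole_responsibility_statement: str
    # Point 4: conformity statement, incl. other Union law if combined
    conformity_statement: str
    # Point 5: personal-data compliance statements, where applicable
    data_protection_statements: list[str] = field(default_factory=list)
    # Point 6: harmonised standards / common specifications relied on
    standards_references: list[str] = field(default_factory=list)
    # Point 7: notified body details; None for internal Annex VI assessment
    notified_body: dict | None = None
    # Point 8: place, date and signatory
    place_of_issue: str = ""
    date_of_issue: str = ""
    signatory_name: str = ""
    signatory_function: str = ""

    def missing_fields(self):
        """Mandatory scalar fields still empty; feeds the pre-signature checks."""
        optional = {"notified_body", "authorised_representative"}
        return [k for k, v in asdict(self).items()
                if k not in optional and v in ("", None)]

    def to_machine_readable(self):
        return json.dumps(asdict(self), indent=2, ensure_ascii=False)
```

A missing_fields check of this kind can feed the pre-signature checklist described below, and to_machine_readable produces the electronic version alongside the signed document.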
The Declaration is a formal legal assertion that the system, at the time of signing, conforms to each applicable requirement. If the system is subsequently found to be non-conforming, the Declaration becomes evidence that the provider either knew or should have known of the non-conformity. The legal exposure depends on the signatory's state of knowledge at the time.
Where the system was genuinely conforming at signing but subsequent drift caused non-conformity, the organisation's post-market monitoring (PMM) obligations under Article 72 are engaged. The signatory's personal exposure is low if the PMM system was operational and the drift was not detectable at the time.
Where a non-conformity existed at signing but was not detected by the internal assessment, both organisation and signatory face enforcement risk under Article 99. The signatory's exposure depends on whether the assessment was conducted with reasonable diligence and in accordance with Annex VI.
Where a non-conformity was known at signing and the Declaration was signed despite unresolved findings, maximum exposure applies. Article 99(5) penalties for supplying incorrect or misleading information to authorities reach up to EUR 7.5 million or one per cent of global annual turnover. Potential personal criminal liability may arise under national law implementing the AI Act's enforcement provisions, along with civil liability to affected persons.
Where the assessment was inadequate or perfunctory, significant exposure arises. The signatory may be deemed to have known of, or to have been recklessly indifferent to, the non-conformity. Demonstrating that the assessment was genuine and rigorous, even though it missed the issue, is the strongest available defence. This is why assessment quality matters.
The AI Governance Lead is the typical signatory, though in some organisations the CEO, CTO, or Chief Compliance Officer signs. The choice of signatory matters because the individual must have both the organisational authority to bind the provider and sufficient knowledge of the system and the assessment to make the attestation in good faith. A senior executive who signs without understanding the system, without reviewing the assessment report, and without confirming the assessment process is complete is exposed if the Declaration proves inaccurate.
The signatory mitigates personal risk through the pre-signature checklist. This checklist confirms that every AISDP module is complete and current, that the conformity assessment report supports an unqualified conformity finding, that all critical non-conformities are resolved, that the Legal and Regulatory Advisor has reviewed the Declaration, and that the evidence pack is indexed and retrievable. The signed checklist is retained as part of the conformity assessment record.
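A sketch of how that checklist might be enforced in tooling, assuming each item must be explicitly confirmed before the Declaration is released for signature; the item wording mirrors the checklist above and the role names are illustrative.

```python
# Illustrative pre-signature gate: every checklist item must be explicitly
# confirmed (with the confirming role recorded) before the Declaration is
# released for signature. Item wording mirrors the checklist described above.

PRE_SIGNATURE_CHECKLIST = [
    "every AISDP module is complete and current",
    "assessment report supports an unqualified conformity finding",
    "all critical non-conformities are resolved",
    "Legal and Regulatory Advisor has reviewed the Declaration",
    "evidence pack is indexed and retrievable",
]

def ready_for_signature(confirmations):
    """confirmations: item -> confirming role, or None if unconfirmed."""
    outstanding = [item for item in PRE_SIGNATURE_CHECKLIST
                   if not confirmations.get(item)]
    return not outstanding, outstanding

confirmations = {item: "Conformity Assessment Coordinator"
                 for item in PRE_SIGNATURE_CHECKLIST}
confirmations["all critical non-conformities are resolved"] = None  # still open

ok, outstanding = ready_for_signature(confirmations)
if not ok:
    print("Declaration withheld; outstanding items:", outstanding)
```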
The AI Governance Lead, as typical signatory, should ensure the internal conformity assessment is complete, all critical non-conformities are resolved, any remaining non-conformities are documented with approved remediation plans, and the assessment report supports the Declaration's claims. The Legal and Regulatory Advisor reviews the Declaration before signature, confirming that the claims are supportable by the evidence pack.

A Declaration signed in the face of unresolved non-conformities is the highest-risk scenario. The distinction between "the assessment missed it" and "we knew and signed anyway" is the difference between a defensible position and an indefensible one. Organisations must ensure that the governance process includes a formal sign-off gate where the AI Governance Lead reviews the assessment report, the non-conformity register, and the remediation status before the Declaration is presented for signature.

National law variations add complexity. The AI Act is a regulation with direct effect across all Member States, but enforcement mechanisms are implemented through national law. Several Member States have indicated they will implement personal liability provisions for individuals who sign false or misleading compliance declarations. The Legal and Regulatory Advisor assesses the personal liability regime in each jurisdiction where the system is placed on the market and advises the signatory accordingly. In jurisdictions where personal liability is possible, the signatory should confirm that directors' and officers' insurance covers AI Act compliance decisions before signing.
The AI Act's liability framework creates insurable risk across four categories. Organisations should assess whether existing insurance coverage extends to AI Act non-compliance or whether supplementary coverage is needed. The Legal and Regulatory Advisor conducts the insurance review during Phase 3 (Architecture and Design) of the delivery process, when the system's risk profile is sufficiently defined to inform the assessment.
Product liability insurance is relevant because the revised Product Liability Directive (Directive (EU) 2024/2853) includes software and AI systems within its scope. Defective AI system outputs that cause damage to individuals may give rise to product liability claims. The provider's policy should be reviewed to confirm AI system outputs are within scope and that policy limits are adequate for the system's deployment scale.
Professional indemnity insurance is particularly relevant for organisations providing AI systems as a service (SaaS-based high-risk AI systems). Key coverage questions include whether the policy covers losses caused by inaccurate outputs, fairness deficiencies, failures of human oversight mechanisms, incomplete Instructions for Use, and regulatory defence costs arising from Article 74 market surveillance investigations.
Directors' and officers' insurance protects the Declaration signatory against personal liability. Four specific coverage gaps must be assessed. Regulatory fine exclusions may leave Article 99 fines uncovered. Intentional act exclusions eliminate coverage for knowingly signing a false Declaration. Prior knowledge exclusions may be triggered if AI compliance status was not disclosed at policy inception. Sub-limit adequacy must be assessed against Article 99(4) penalties reaching EUR 15 million or three per cent of global turnover. The Legal and Regulatory Advisor should review the D&O policy at least twelve months before the August 2026 deadline, seek written confirmation of coverage scope from the insurer, and consider a policy endorsement specifically addressing AI Act compliance decisions.
Cyber insurance must be assessed for AI-specific incident types. Model extraction, data poisoning, and adversarial attacks may fall outside traditional cyber insurance policies. Confirm that AI-specific incident types are covered and that incident response provisions align with the AI-specific incident response plan.

The insurance review should be documented and shared with the AI Governance Lead and the organisation's risk management function. Insurance findings inform the overall risk assessment and may affect decisions about system deployment, market scope, and acceptable risk thresholds. Organisations that cannot obtain adequate coverage for a specific AI system deployment should factor the uninsured risk into their market placement decision.
The proposed AI Liability Directive (COM/2022/496) introduces a rebuttable presumption of fault for AI systems, substantially lowering the evidentiary burden for claimants. Where a claimant demonstrates that an AI system caused damage and the defendant failed to comply with a duty of care including the AI Act's requirements, the court presumes a causal link between the non-compliance and the damage. The burden shifts to the defendant to rebut the presumption.
In practice, an individual denied credit by a high-risk AI system can bring a claim against the provider. If the individual demonstrates the system produced the adverse decision and the provider failed to comply with any Chapter 2 requirement, such as Article 10 data governance or Article 14 human oversight, the court presumes the non-compliance caused the damage. The provider must then prove, on the balance of probabilities, that the non-compliance did not contribute to the adverse decision. This is a substantially lower evidentiary burden for claimants than the current tort law regime, where the claimant must prove causation directly.
The AISDP as defensive evidence. A complete, well-maintained AISDP is the provider's primary defensive asset under the AI Liability Directive. If the provider can demonstrate compliance with every applicable Chapter 2 requirement, the presumption of fault does not arise because there is no non-compliance to trigger it. If non-compliance is alleged, the AISDP's evidence pack provides the factual basis for rebuttal. A risk register that identified and mitigated the specific risk at issue, a fairness evaluation that tested the specific subgroup, a human oversight log showing a human reviewer examined the specific case: these artefacts enable the provider to rebut the presumption.
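One way to keep the evidence pack retrievable for rebuttal is to index artefacts against the Chapter 2 requirement they evidence, so the material answering a specific allegation can be produced immediately. A minimal sketch, with hypothetical artefact names:

```python
# Illustrative evidence index: Chapter 2 requirement -> artefacts evidencing
# compliance. Artefact names are hypothetical; the point is that rebuttal
# material for a specific allegation is retrievable by article.
EVIDENCE_INDEX = {
    "Article 9 (risk management)":  ["risk-register-v3.xlsx"],
    "Article 10 (data governance)": ["data-lineage-report.pdf",
                                     "fairness-eval-subgroups.csv"],
    "Article 14 (human oversight)": ["oversight-review-log-2026-Q1.csv"],
}

def rebuttal_material(alleged_article):
    """Artefacts available to rebut an allegation against one requirement."""
    return EVIDENCE_INDEX.get(alleged_article, [])

# e.g. a claim alleging an Article 14 human-oversight failure:
print(rebuttal_material("Article 14 (human oversight)"))
```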
The Directive also grants national courts the power to order disclosure of evidence from the provider, including technical documentation. Organisations should assume that any competent authority request for the AISDP may be followed by a civil court disclosure order for the underlying evidence pack. The evidence pack must withstand both regulatory and judicial scrutiny. Documentary gaps that might survive a compliance review may not survive litigation discovery.

Insurance implications. The rebuttable presumption substantially increases the likelihood and potential quantum of civil liability claims. The Legal and Regulatory Advisor assesses the implications for product liability, professional indemnity, and general liability insurance coverage. Insurers are expected to increase premiums for AI system providers as the Directive's impact becomes quantifiable. Early engagement with the insurer, supported by evidence of a robust compliance programme, may mitigate premium increases.

The combination of the AI Act's regulatory penalties and the AI Liability Directive's civil liability framework creates a dual exposure for providers. Regulatory non-compliance triggers Article 99 penalties from authorities. The same non-compliance simultaneously lowers the barrier for civil claims from affected persons. The AISDP serves as the single evidence base that addresses both dimensions, which is why its completeness and accuracy are existential concerns for providers of high-risk AI systems.

Frequently asked questions.

Does assessment failure mean the system can never be deployed?
Not immediately. Three pathways exist: remediate and re-assess if issues are bounded, defer deployment until fundamental issues are resolved, or withdraw the system entirely. Deploying a non-conforming system carries greater risk than any of these alternatives.
What are the consequences of a notified body rejection?
The rejection is documented in the notified body's records and may be communicated to the competent authority. The provider cannot self-certify as an alternative for systems where third-party assessment is mandatory. The notified body typically provides a detailed deficiency report that serves as a remediation plan. Budget for two to three assessment cycles.

Can the Declaration's signatory face personal liability?
Potentially, yes. The AI Act has direct effect, but enforcement is implemented through national law. Several Member States have indicated personal liability provisions for individuals signing false or misleading compliance declarations. The signatory should confirm D&O insurance coverage before signing.

How does the AISDP help against civil claims?
A complete AISDP is the provider's primary defensive asset. If the provider demonstrates compliance with every Chapter 2 requirement, the rebuttable presumption of fault does not arise. If non-compliance is alleged, the evidence pack provides the factual basis to rebut the presumption in court.

When should the insurance review take place?
During Phase 3 (Architecture and Design) of the delivery process, when the system's risk profile is sufficiently defined. The Legal and Regulatory Advisor should review D&O coverage at least twelve months before the August 2026 deadline and assess all four insurance categories against AI Act exposure.

What is the effect of harmonised standards under Article 40?
Under Article 40, compliance with harmonised standards published in the Official Journal shifts the burden of proof from the provider to the authority. Pending JTC 21 standards, international standards like ISO/IEC 42001 provide a credible interim framework.

What must the Declaration of Conformity contain?
Annex V specifies eight mandatory content points: system identification, provider details, sole responsibility statement, conformity statement, personal data compliance, standards references, notified body details where applicable, and signatory information.

What pathways exist when conformity cannot be demonstrated?
Three pathways: remediation and re-assessment for bounded issues, deployment deferral for fundamental non-conformities requiring rearchitecture, or system withdrawal when issues are irremediable within economic or technical constraints.

What is the signatory's exposure if the system proves non-conforming?
Exposure ranges from low (genuine post-signing drift with operational PMM) to maximum (knowingly false declaration: Article 99(5) penalties up to EUR 7.5 million or one per cent of turnover, plus potential personal criminal liability).

Which insurance categories does the AI Act's liability framework engage?
Four categories: product liability (covering AI outputs under the revised Product Liability Directive), professional indemnity (for SaaS providers), directors' and officers' (protecting the Declaration signatory), and cyber insurance (for AI-specific incidents).

What does the proposed AI Liability Directive change?
It introduces a rebuttable presumption of fault: if a provider failed to comply with AI Act requirements and the system caused damage, the court presumes causation. The provider must rebut this, reversing the normal burden of proof.