The Declaration of Conformity under Article 47 and Annex V is a legal instrument carrying personal and organisational liability. Annex V mandates eight content points, each traceable to the AISDP evidence pack. Signing the Declaration in the face of unresolved non-conformities exposes the signatory to personal liability and the organisation to penalties under Article 99. The AI Liability Directive introduces a rebuttable presumption of fault that makes the AISDP the provider's primary defensive asset.
The Declaration of Conformity under Article 47 requires eight mandatory content points, each populated with accurate, traceable information drawn from the AISDP and the conformity assessment record.
Point 1 requires the AI system's name, type, and any additional unambiguous reference allowing identification and traceability, drawn from AISDP Module 1 and cross-checked against the EU database entry and the model registry for version alignment. The system reference must be unambiguous: an internal asset identifier or the EU database registration number provides the required traceability. Point 2 requires the name and address of the provider or, where the provider is established outside the EU, the name and address of the authorised representative appointed under Article 22. The Legal and Regulatory Advisor verifies that the legal entity name matches the entity registered in the EU database and that the authorised representative's written mandate covers the AI Act scope. Point 3 is a standard legal statement that the Declaration is issued under the sole responsibility of the provider, using the Annex V wording verbatim. Where a single combined Declaration covers multiple regulations, the sole-responsibility statement must cover each applicable regulation.
Point 4 states that the AI system is in conformity with the AI Act and, if applicable, any other relevant Union law requiring a Declaration of Conformity. This must be supported by the conformity assessment report concluding with an unqualified conformity finding. For combined Declarations covering multiple regulations, the AI System Assessor verifies that each regulation's requirements have been independently assessed and the conformity statement covers each applicable regulation by name and reference.
Point 5 requires a statement of compliance with the GDPR, the EUDPR for EU institutional data, and the Law Enforcement Directive for law enforcement data, where personal data processing is involved. This is supported by a current DPIA covering the system's data processing activities. The DPO Liaison confirms that the DPIA is current and covers all relevant data processing. This statement is omitted only where the system processes no personal data, and the basis for that determination is documented.
Point 6 references the harmonised standards or common specifications relied upon, with each standard identified by its full reference number and title. Where only part of a standard was applied, the specific clauses are identified. Where no harmonised standards are available, the Declaration states this and references the common specifications or international standards used as the compliance baseline.
Point 7 applies where a notified body was involved, recording its name, its four-digit NANDO identification number, a description referencing the Annex VII procedure, and the certificate number and date. For Annex III point 1 biometric systems assessed by a notified body under Annex VII, these fields are populated from the notified body engagement record. For all other Annex III systems assessed internally under Annex VI, the Declaration states that the system was assessed under the internal conformity assessment procedure and that no notified body was involved.
Point 8 records the place of issue, the date, the signatory's name and function, an indication of the person or entity on whose behalf the signature is made, and the personal signature which may be electronic or wet-ink. The Conformity Assessment Coordinator assembles the Declaration by populating each field from the evidence sources identified above. The Legal and Regulatory Advisor reviews the completed Declaration for legal accuracy before the AI Governance Lead signs it, completing a pre-signature checklist that confirms every field is current and supported by the evidence pack.
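The eight Annex V content points lend themselves to a mechanical completeness check before the pre-signature review. The following is a minimal illustrative sketch: the field names and the validation rule are hypothetical, not taken from the Act or any specific tooling.

```python
# Hypothetical sketch: the eight Annex V content points modelled as named
# fields so completeness can be checked mechanically before signature.
# Field names are illustrative examples, not regulatory terminology.
ANNEX_V_POINTS = {
    1: "system_identification",       # name, type, unambiguous reference
    2: "provider_identity",           # provider or authorised representative
    3: "sole_responsibility_statement",
    4: "conformity_statement",        # AI Act and other applicable Union law
    5: "data_protection_statement",   # GDPR / EUDPR / LED, or documented N/A
    6: "standards_references",        # harmonised standards or common specs
    7: "notified_body_details",       # or internal-assessment statement
    8: "issue_details",               # place, date, signatory, signature
}

def missing_points(declaration: dict) -> list[int]:
    """Return the Annex V point numbers whose field is absent or empty."""
    return [n for n, key in ANNEX_V_POINTS.items()
            if not declaration.get(key)]
```

A draft with only Points 1 and 2 populated would report Points 3 through 8 as outstanding, giving the Conformity Assessment Coordinator an explicit worklist.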
Where the system is subject to multiple Union regulations requiring a Declaration of Conformity, Article 47(3) permits a single combined Declaration. The combined Declaration must contain all information required to identify each applicable regulation and satisfy the content requirements of each. The Conformity Assessment Coordinator maintains a mapping from each regulation's Declaration requirements to the combined Declaration's content, ensuring no requirement is omitted in the consolidation.
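The Coordinator's mapping for a combined Declaration can be sketched as a per-regulation requirements table checked against the combined document's contents. Regulation names and field keys below are hypothetical examples for illustration only.

```python
# Illustrative sketch of the consolidation check: each regulation's
# Declaration requirements are listed, and the combined Declaration is
# verified so that no requirement is omitted. All names are hypothetical.
PER_REGULATION_REQUIREMENTS = {
    "AI Act (Annex V)": {"system_identification", "conformity_statement",
                         "standards_references", "issue_details"},
    "Machinery Regulation": {"machine_identification", "conformity_statement",
                             "issue_details"},
}

def omitted_requirements(combined_fields: set) -> dict:
    """Fields required by some regulation but missing from the combined text."""
    return {reg: req - combined_fields
            for reg, req in PER_REGULATION_REQUIREMENTS.items()
            if req - combined_fields}
```

An empty result confirms the consolidation satisfies every listed regulation; any non-empty entry names the regulation and the fields still missing.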
The Declaration of Conformity is not a statement of intent or aspiration but a formal legal assertion that the system, at the time of signing, conforms to each applicable requirement.
If the system is subsequently found to be non-conforming, the Declaration becomes evidence that the provider either knew or should have known of the non-conformity. A Declaration signed in the face of unresolved non-conformities exposes the signatory to personal liability and the organisation to Tier 3 penalties under Article 99(5) of up to EUR 7,500,000 or 1 per cent of global turnover for providing misleading information.
The legal exposure depends on the nature of the non-conformity and the signatory's state of knowledge at signing. Where the system was genuinely conforming at signing but subsequent drift caused non-conformity, the signatory's exposure is low if the post-market monitoring (PMM) system was operational and the drift was not detectable. Where a non-conformity existed but was not detected by a diligent assessment, both the organisation and signatory face enforcement risk, though demonstrating a genuine, rigorous assessment process is the strongest available defence.
Where a non-conformity was known at signing and the Declaration was signed despite unresolved findings, maximum exposure applies: Article 99(5) penalties for misleading information reaching EUR 7,500,000 or 1 per cent of global turnover, potential personal criminal liability under national law implementing the AI Act's enforcement provisions, and civil liability to affected persons who suffered harm from the non-conforming system. This scenario represents the highest-risk position: knowingly signing a false Declaration has no effective defence.
The AI Governance Lead is the typical signatory, bearing personal accountability for the Declaration's accuracy.
In some organisations, the CEO, CTO, or Chief Compliance Officer signs instead. The choice matters: the signatory should have both the organisational authority to bind the provider and sufficient knowledge of the system and the assessment to make the attestation in good faith. A CEO who signs without understanding the system, without reviewing the assessment report, and without confirming that the Conformity Assessment Coordinator has completed the process is personally exposed if the Declaration proves inaccurate. The signatory's knowledge and diligence are the primary factors that determine their personal liability if non-conformity is later discovered.
The AI Governance Lead mitigates the signatory's personal risk through the pre-signature checklist. The checklist confirms that every AISDP module is complete and current, that the conformity assessment report supports an unqualified conformity finding with no outstanding critical non-conformities, that all critical non-conformities are resolved and any remaining non-conformities have approved remediation plans, that the Legal and Regulatory Advisor has reviewed the Declaration and confirmed the claims are supportable by the evidence pack, and that the evidence pack is indexed and retrievable for any subsequent regulatory or judicial scrutiny. The signed checklist is retained as part of the conformity assessment record, providing documentary evidence of the signatory's diligence.
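The pre-signature checklist described above works as a hard gate: signing proceeds only when every item is confirmed. A minimal sketch, with item wording taken from the text and the gating function itself a hypothetical illustration:

```python
# Hypothetical signing gate: the Declaration may be signed only when every
# pre-signature checklist item has been confirmed. The function and its
# return shape are illustrative, not a prescribed implementation.
PRE_SIGNATURE_CHECKLIST = [
    "All AISDP modules complete and current",
    "Assessment report supports an unqualified conformity finding",
    "Critical non-conformities resolved; remediation plans approved",
    "Legal and Regulatory Advisor has reviewed the Declaration",
    "Evidence pack indexed and retrievable",
]

def ready_to_sign(confirmed: set) -> tuple:
    """Return (ok, outstanding items); ok only when nothing is outstanding."""
    outstanding = [item for item in PRE_SIGNATURE_CHECKLIST
                   if item not in confirmed]
    return (not outstanding, outstanding)
```

The returned list of outstanding items doubles as the documentary record of what blocked signature, complementing the signed checklist retained in the conformity assessment record.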
The AI Act is a regulation with direct effect across all Member States, but enforcement mechanisms are implemented through national law. Several Member States have indicated that they will implement personal liability provisions for individuals who sign false or misleading compliance declarations. The Legal and Regulatory Advisor assesses the personal liability regime in each jurisdiction where the system is placed on the market and advises the signatory accordingly, as the personal exposure may differ materially between jurisdictions.
The AI Act's liability framework creates insurable risk that organisations must assess across four insurance categories.
Existing coverage may not extend to AI Act non-compliance, and supplementary coverage may be needed.
Product liability insurance under the revised Product Liability Directive (Directive (EU) 2024/2853) now includes software and AI systems within its scope. Defective AI system outputs that cause damage to individuals may give rise to product liability claims. The Legal and Regulatory Advisor reviews the organisation's product liability insurance to confirm that AI system outputs are within the policy's scope and that the policy limits are adequate for the system's deployment scale. A system deployed to thousands of deployers serving millions of affected persons requires materially higher coverage limits than a system serving a single deployer.
Professional indemnity insurance is particularly relevant for organisations providing AI systems as a service. The policy should cover claims arising from AI system failures that cause loss to deployers or affected persons. Key coverage questions include whether the policy covers losses caused by inaccurate AI system outputs, losses caused by fairness deficiencies, losses arising from a failure of the human oversight mechanisms the provider designed, claims arising from a deployer's reliance on incomplete or misleading Instructions for Use, and whether the policy extends to cover regulatory defence costs during Article 74 market surveillance investigations. The PI insurance review is conducted during Phase 3 of the delivery process when the system's risk profile is sufficiently defined to inform the coverage assessment.
The proposed AI Liability Directive introduces a rebuttable presumption of fault for AI systems that substantially increases civil liability exposure.
Where a claimant demonstrates that an AI system caused damage and that the defendant failed to comply with a duty of care, including the AI Act's requirements, the claimant need not prove a causal link between the non-compliance and the damage. The court presumes the link exists, and the burden shifts to the defendant to rebut the presumption.
In practice, this transforms the litigation landscape for AI-related claims. An individual who was denied credit by a high-risk AI system can bring a claim against the provider. If the individual demonstrates that the system produced the adverse decision and that the provider failed to comply with any Chapter 2 requirement, for example Article 10 data governance or Article 14 human oversight, the court presumes the non-compliance caused the damage. The provider must then prove, on the balance of probabilities, that the non-compliance did not contribute to the adverse decision. This is a substantially lower evidentiary burden for claimants than the current tort law regime, where the claimant must prove causation directly, and it substantially increases the likelihood of successful claims against providers of non-conforming systems.
A complete, well-maintained AISDP is the provider's primary defensive asset under the AI Liability Directive. If the provider can demonstrate compliance with every applicable Chapter 2 requirement, the presumption of fault does not arise because there is no non-compliance to trigger it. If non-compliance is alleged, the AISDP's evidence pack provides the factual basis for rebuttal. A risk register that identified and mitigated the specific risk at issue, a fairness evaluation that tested the specific subgroup the claimant belongs to, a human oversight log showing that a human reviewer examined the specific case: these are the artefacts that enable the provider to rebut the presumption with specific, traceable evidence rather than general assertions of compliance quality.
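The rebuttal posture described above depends on a requirement-to-artefact index: every applicable Chapter 2 requirement should trace to at least one retrievable artefact. A hypothetical sketch of that index, with article labels from the text and artefact filenames invented for illustration:

```python
# Illustrative evidence index: each Chapter 2 requirement maps to the AISDP
# artefacts that would support rebuttal of the presumption of fault.
# Artefact names are hypothetical examples.
EVIDENCE_INDEX = {
    "Article 9 risk management": ["risk_register.xlsx"],
    "Article 10 data governance": ["data_lineage_report.pdf",
                                   "fairness_evaluation.pdf"],
    "Article 14 human oversight": ["oversight_log_2025.csv"],
}

def unsupported_requirements(index: dict) -> list:
    """Requirements with no traceable artefact, i.e. rebuttal gaps."""
    return [req for req, artefacts in index.items() if not artefacts]
```

Any requirement returned by the gap check is exactly the kind of documentary hole that might survive a compliance review but not litigation discovery.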
Can a single combined Declaration of Conformity cover multiple Union regulations?
Yes. Article 47(3) permits a single combined Declaration, but it must contain all information required to identify each applicable regulation and satisfy each regulation's content requirements independently.
What protects the signatory against personal liability?
A pre-signature checklist confirming the assessment is complete, all critical non-conformities are resolved, and the Legal and Regulatory Advisor has reviewed the Declaration. D&O insurance should also be reviewed for coverage of AI Act compliance decisions.
What role does the AISDP play under the AI Liability Directive?
It serves as the provider's primary defensive asset. If the provider demonstrates compliance with every Chapter 2 requirement, the rebuttable presumption of fault does not arise. If non-compliance is alleged, the evidence pack provides the basis for rebuttal.
Should the D&O policy be reviewed before the August 2026 deadline?
Yes, at least twelve months before. The review should check for regulatory fine exclusions, intentional act exclusions, prior knowledge exclusions, and whether sub-limits are adequate for Article 99 penalties.
What does the AI Liability Directive change for providers?
It introduces a rebuttable presumption of fault, where non-compliance with the AI Act shifts the burden to the provider to disprove causation.
Which insurance policies should be reviewed for AI Act exposure?
Product liability, professional indemnity, D&O, and cyber insurance policies, with attention to regulatory fine exclusions and AI-specific coverage gaps.
Where the assessment was inadequate or perfunctory, the signatory may be deemed to have known or to have been recklessly indifferent to the non-conformity. Demonstrating that the assessment was genuine and rigorous, even though it missed the issue, is the strongest available defence. This is why assessment quality matters: the rigour of the conformity assessment process is not merely a compliance exercise but the signatory's primary protection against personal liability.
In jurisdictions where personal liability is possible, the signatory should confirm that their directors' and officers' insurance covers AI Act compliance decisions before signing. The insurance assessment must be completed before the Declaration is signed, not after: once the signature is applied, the personal liability is activated and retrospective insurance arrangements may not provide coverage for the existing exposure.
Directors' and officers' insurance protects the Declaration signatory against personal liability arising from management decisions. The Declaration of Conformity is a management decision with potentially significant financial consequences. The Legal and Regulatory Advisor reviews the D&O policy against four specific coverage gap scenarios.
First, many D&O policies exclude coverage for regulatory fines. If the policy excludes regulatory penalties, Article 99 fines are not covered and the signatory bears personal exposure for the fine amount. The Legal and Regulatory Advisor assesses whether the exclusion applies to the specific fine categories under Article 99 and whether the insurer would defend the claim even if the fine itself is excluded. Second, D&O policies universally exclude coverage for fraud, dishonesty, and intentional wrongdoing. If the signatory knowingly signed a false Declaration, the intentional act exclusion eliminates coverage entirely. This reinforces the importance of genuine assessment quality: the signatory's best protection is an honest, rigorous assessment process.
Third, some policies exclude coverage for claims arising from facts the insured was aware of before the policy inception date. Organisations transitioning to AI Act compliance should disclose their AI system portfolio and compliance status to the insurer at renewal to avoid triggering this exclusion. Fourth, even where the D&O policy covers AI Act claims, the sub-limits may be insufficient given the penalty scale reaching EUR 15 million or 3 per cent of turnover.
The Legal and Regulatory Advisor should review the existing D&O policy against these four scenarios at least twelve months before the August 2026 deadline, discuss AI Act exposure with the insurer seeking written confirmation of coverage scope, consider a policy endorsement specifically addressing AI Act compliance decisions if the standard language is ambiguous, and ensure the signatory's AI Act responsibilities are included in the schedule of insured positions.
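The four D&O coverage-gap scenarios can be recorded as structured review findings. The policy field names and thresholds in this sketch are hypothetical; only the four scenarios themselves come from the text.

```python
# Hypothetical review helper: the four D&O coverage-gap scenarios checked
# against a policy record. Field names and the EUR threshold are
# illustrative assumptions, not insurer terminology.
def dno_gap_findings(policy: dict) -> list:
    findings = []
    if policy.get("excludes_regulatory_fines"):
        findings.append("Article 99 fines excluded; signatory bears exposure")
    if policy.get("intentional_act_exclusion"):
        findings.append("No cover if a false Declaration is knowingly signed")
    if not policy.get("ai_portfolio_disclosed_at_renewal"):
        findings.append("Prior-knowledge exclusion risk: disclose AI portfolio")
    if policy.get("sub_limit_eur", 0) < 15_000_000:
        findings.append("Sub-limit below the EUR 15 million Article 99 scale")
    return findings
```

Each finding corresponds to one of the four scenarios, giving the Legal and Regulatory Advisor a checklist output to share with the insurer when seeking written confirmation of coverage scope.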
Cyber insurance should be reviewed to confirm coverage of AI-specific incident types including model extraction, data poisoning, and adversarial attacks, which may fall outside traditional cyber insurance policies. The policy's incident response provisions should align with the AI-specific incident response plan documented in the cybersecurity section. The Legal and Regulatory Advisor conducts the insurance review during Phase 3 of the delivery process, when the system's risk profile is sufficiently defined to inform the coverage assessment. The review findings are documented and shared with the AI Governance Lead and the organisation's risk management function.
The Directive also grants national courts the power to order the disclosure of evidence from the provider, including technical documentation. Organisations should assume that any competent authority request for the AISDP may be followed by a civil court disclosure order for the underlying evidence pack. Documentary gaps that might survive a compliance review may not survive litigation discovery. The evidence pack must withstand both regulatory and judicial scrutiny.
The Directive's rebuttable presumption substantially increases both the likelihood and potential quantum of civil liability claims against AI system providers. The Legal and Regulatory Advisor assesses the implications for the organisation's product liability, professional indemnity, and general liability insurance coverage. Insurers are expected to increase premiums for AI system providers as the Directive's impact becomes quantifiable. Early engagement with the insurer, supported by evidence of a robust compliance programme documented in the AISDP, may mitigate premium increases by demonstrating that the organisation's risk profile is well-managed.