Article 26 of the EU AI Act imposes eight distinct obligations on deployers of high-risk AI systems, each with penalty exposure reaching EUR 15 million or 3 per cent of global annual turnover. This handbook provides the operational reference for meeting those obligations.
A deployer is any organisation that uses a high-risk AI system under its own authority. If the organisation purchased, licensed, or procured an AI system from another entity and operates it in a context where it affects natural persons, it is a deployer. The deployer's obligations are distinct from the provider's: a deployer does not prepare a full AISDP or conduct a conformity assessment, but bears its own substantial compliance responsibilities.
Article 26 imposes eight distinct obligations on deployers. First, use the system in accordance with the provider's Instructions for Use, operating within documented boundaries for purposes, populations, and contexts. Second, assign human oversight to individuals with the competence, training, and authority to exercise it effectively. The word "authority" carries significant weight: an operator trained to use the system but without organisational authority to override its recommendations does not exercise oversight in the statutory sense.
Third, ensure input data is relevant and sufficiently representative, insofar as the deployer exercises control over that data. Fourth, monitor the system's operation and suspend it if it presents a risk. Fifth, inform the provider of serious incidents without undue delay. Sixth, retain automatically generated logs for a minimum of six months. Seventh, inform workers and their representatives before putting the system into service in an employment context. Eighth, conduct a Fundamental Rights Impact Assessment under Article 27 before putting the system into service.
Two additional obligations apply to specific categories: public authority deployers must register in the EU database under Article 49(3), and all deployers must ensure AI literacy among staff under Article 4. The penalty exposure for each obligation reaches EUR 15 million or 3 per cent of global annual turnover. The supply chain may involve importers under Article 23 who verify third-country provider compliance, distributors under Article 24 who verify CE marking, and authorised representatives under Article 22 who act on behalf of third-country providers.
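The eight obligations lend themselves to a standing checklist. The sketch below is illustrative only: the key names and one-line summaries are ours, not the Act's, and a real compliance tracker would carry evidence references rather than booleans.

```python
# Illustrative checklist of the eight Article 26 deployer obligations.
# Key names and summaries are paraphrases for this sketch, not statutory text.
ARTICLE_26_OBLIGATIONS = {
    "instructions_for_use": "Use the system within the provider's Instructions for Use",
    "human_oversight": "Assign oversight to persons with competence, training, and authority",
    "input_data": "Ensure input data is relevant and sufficiently representative",
    "monitoring": "Monitor operation and suspend the system if it presents a risk",
    "incident_reporting": "Inform the provider of serious incidents without undue delay",
    "log_retention": "Retain automatically generated logs for at least six months",
    "worker_notification": "Inform workers and representatives before employment use",
    "fria": "Conduct a Fundamental Rights Impact Assessment before putting into service",
}

def open_obligations(status: dict) -> list:
    """Return the obligation keys not yet marked satisfied."""
    return [key for key in ARTICLE_26_OBLIGATIONS if not status.get(key, False)]
```

A deployer preparing for inspection would expect `open_obligations` to return an empty list, with each entry backed by the corresponding module of the deployer compliance record.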
Article 25(1) defines three circumstances in which a deployer's status escalates to that of a provider, carrying the full weight of provider obligations including preparing a complete AISDP, conducting a conformity assessment, signing a Declaration of Conformity, and registering in the EU database. Failing to recognise the escalation is itself a compliance violation.
The first trigger is rebranding. If a deployer offers a third-party AI system to others under its own commercial identity, it becomes the provider. The test is whether end users would reasonably understand the organisation to be responsible for the system. This catches white-labelling arrangements, reseller models, and platform integrations.
The second trigger is substantial modification. Article 25(1)(b) treats a deployer who makes a substantial modification to a high-risk system as its provider. A substantial modification under Article 3(23) is a change that affects compliance or modifies the intended purpose. The most common trigger in practice is fine-tuning a general-purpose AI model for a specific high-risk use case. Adapting a language model for recruitment screening or credit assessment almost always changes the intended purpose. Not every configuration change qualifies: adjusting thresholds within ranges specified in the Instructions for Use or selecting pre-defined configuration options are generally deployer-level activities.
The third trigger is changing the intended purpose. If a deployer uses the system for a purpose outside the provider's documentation and the new purpose causes the system to become high-risk, the deployer becomes the provider for that use. The provider's conformity assessment and Declaration of Conformity do not cover uses outside the documented intended purpose.
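The three triggers can be expressed as a simple assessment routine. This is an illustrative sketch: the flags stand in for what is, in practice, a legal judgement, and only the substantial-modification subparagraph is cited because it is the one identified above.

```python
def provider_status_triggers(rebranded: bool,
                             substantially_modified: bool,
                             purpose_changed_to_high_risk: bool) -> list:
    """Return the Article 25(1) escalation triggers that apply.

    Any non-empty result means the organisation takes on the full set of
    provider obligations for the system (AISDP, conformity assessment,
    Declaration of Conformity, EU database registration).
    """
    triggers = []
    if rebranded:
        triggers.append("offers the system under its own commercial identity")
    if substantially_modified:
        triggers.append("substantial modification (Art. 25(1)(b))")
    if purpose_changed_to_high_risk:
        triggers.append("changed intended purpose rendering the system high-risk")
    return triggers
```

The Legal and Regulatory Advisor would re-run this assessment at each of the reassessment points described later in this handbook, documenting the reasoning behind each flag.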
The deployer's compliance depends on information only the provider can supply. Eight items should be requested before the system is put into service and made contractual conditions of procurement.
The Declaration of Conformity under Article 47 confirms the conformity assessment is complete. CE marking confirmation under Article 48 provides visual confirmation. The Instructions for Use under Article 13 define intended purpose, limitations, oversight requirements, and monitoring guidance. The EU database registration reference under Article 49 verifies registration status. A performance and fairness evaluation summary under Articles 9 and 10 is required for an informed FRIA. Serious incident reporting procedures under Article 73 ensure the deployer knows how to report. An update and version management policy under Articles 12 and 18 ensures the deployer knows when changes occur. An end-of-life policy under Article 18 enables transition planning.
If the provider refuses any item, the deployer records the gap and assesses whether it prevents fulfilment of its own obligations. For the Declaration of Conformity, CE marking, and Instructions for Use, refusal should prevent the system from being put into service. For other items, the deployer must document the gap, assess the compliance risk, and implement compensating measures.
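The gating rule distinguishes three blocking items from those that permit compensating measures. A minimal sketch of that decision, with hypothetical item identifiers:

```python
# Hypothetical item identifiers; the three blocking items follow the rule
# stated above (Declaration of Conformity, CE marking, Instructions for Use).
BLOCKING_ITEMS = {"declaration_of_conformity", "ce_marking", "instructions_for_use"}

def procurement_decision(missing_items: set) -> str:
    """Map the set of items the provider refused to supply to an outcome."""
    if missing_items & BLOCKING_ITEMS:
        return "do not put into service"
    if missing_items:
        return "document gap, assess risk, implement compensating measures"
    return "proceed"
```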
The procurement contract should address the provider's obligation to supply and maintain current Instructions for Use, notification of updates, cooperation with the deployer's FRIA including disaggregated fairness data, participation in serious incident investigations, allocation of liability, and consequences of provider insolvency including documentation escrow arrangements. Deployers do not always procure directly from the provider; intermediary importers and distributors may be involved, and the deployer should direct information requests through the appropriate channel.
Article 27 requires deployers to assess the impact on fundamental rights before putting the system into service. The FRIA is the single most demanding documentation obligation the deployer faces, with no direct predecessor in EU law. The GDPR's Data Protection Impact Assessment covers data processing risks; the FRIA covers the full range of Charter of Fundamental Rights protections.
The FRIA assesses impact against Charter rights relevant to the deployment context: human dignity under Article 1, respect for private life under Article 7, non-discrimination under Article 21, rights of the child under Article 24, fair working conditions under Article 31, and the right to an effective remedy under Article 47. A right is excluded from the assessment only where the deployer demonstrates it is not engaged.
The six-step methodology begins with scope definition: identifying the system, its purpose in the deployer's context, affected persons, and supported decisions. Rights mapping identifies every engaged Charter right. Impact analysis assesses nature, severity, and likelihood of adverse impact for each right. Intersectional analysis assesses whether impact differs for persons at the intersection of multiple protected characteristics; a system may perform adequately for individual groups yet produce adverse outcomes for intersectional subgroups.
Mitigation assessment evaluates whether the provider's controls adequately address risk in the deployer's context, defining supplementary controls where they do not. Stakeholder consultation engages affected groups, demonstrating how input influenced conclusions. Article 27(4) requires notification to the relevant market surveillance authority. Public authority deployers must publish a summary.
A deployer that does not trigger Article 25 provider status prepares a deployer compliance record documenting its own obligations in an eight-module structure.
Module D1 covers system identification and provider reference: system name, version, provider identity, EU database registration reference, Declaration of Conformity receipt, and the date the system was put into service. Module D2 documents intended purpose and deployment context: the deployer's specific use, operational domain, decision categories, geographic scope, affected persons, and any configurations with assessment of whether they remain within provider parameters.
Module D3 contains the full Fundamental Rights Impact Assessment under Article 27 with stakeholder consultation, intersectional analysis, authority notification, and supplementary controls. Module D4 documents human oversight arrangements: the oversight structure mapped to Instructions for Use, personnel qualifications, training records, override procedures, break-glass procedures, and break-glass test results.
Module D5 covers monitoring and incident response: monitoring activities aligned to the Instructions for Use, metrics tracked, thresholds, review frequency, log retention under Article 26(6) for minimum six months, the incident register, triage process, escalation path, and all deployer-provider communications. Module D6 addresses data protection: DPIA where required, lawful basis, data processing agreement, and data subject rights procedures including Article 86 right to explanation.
Module D7 covers registration for public authority deployers under Article 49(3). Module D8 documents compliance review and maintenance: quarterly review schedule, context change assessment, Article 25 reassessment process, and version history. The compliance record is maintained as a living document, reviewed quarterly during normal operations, and updated whenever material changes occur.
Technical controls are effective only if the people operating the system use them correctly. The deployer oversight pyramid organises human and organisational oversight across five levels.
Level 1, technical monitoring, is staffed by IT operations and platform engineers. They provide continuous monitoring of availability, response times, error rates, and log integrity with automated alerting. Escalation triggers include unavailability, error spikes, log failures, and unexpected changes following provider updates.
Level 2, AI system operators, comprises the human operators who interact with outputs daily: recruiters, clinicians, case workers, claims assessors. They exercise real-time override, intervention, and escalation capabilities. Training covers the system's capabilities, limitations, confidence indicators, and failure modes. The automation bias risk requires specific countermeasures: delayed display requiring independent assessment before viewing recommendations, calibration cases with known incorrect outputs, review time monitoring, and performance incentives rewarding thoroughness over throughput.
Level 3, operational management, tracks business-level metrics including complaint volumes, appeal rates, override rates per operator, and affected person feedback. They detect intent drift where operators develop workarounds or business units extend use without governance review.
Level 4, compliance, legal, and data protection functions, oversee regulatory posture through regular reporting from lower levels. They conduct regulatory horizon scanning of AI Office guidance, enforcement actions, and legislative amendments.
Level 5, executive leadership, provides strategic oversight, resource allocation, and risk appetite setting with quarterly reporting during normal operations and immediate briefing for serious incidents.
Article 26(4) requires monitoring on the basis of the Instructions for Use. The deployer monitors four dimensions: output consistency checking for distributional shifts, human oversight metrics including override rates and review dwell times, complaint and appeal patterns for clusters alleging similar harm, and operational context stability assessing whether the population or decision context has changed.
The monitoring schedule ranges from weekly output distribution reviews at Level 1 through monthly override and complaint analysis at Level 3 to quarterly FRIA validity checks and full oversight reviews at Level 4, with an annual oversight audit by internal audit. Suspension triggers include output patterns suggesting systematic bias, sudden unexplained distributional change, error rates exceeding thresholds, use outside documented purpose, serious incidents, or any belief that harm is occurring.
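In caricature, the suspension decision reduces to a set of threshold and flag checks. The thresholds below are placeholders; actual values come from the Instructions for Use and the deployer's own risk assessment, and several triggers (such as "any belief that harm is occurring") resist mechanisation entirely.

```python
def should_suspend(error_rate: float, error_threshold: float,
                   distribution_shift: float, shift_threshold: float,
                   out_of_purpose_use: bool, serious_incident: bool) -> bool:
    """Illustrative suspension check over the trigger categories above.

    Thresholds are deployer-defined placeholders, not statutory values.
    """
    return (error_rate > error_threshold
            or distribution_shift > shift_threshold
            or out_of_purpose_use
            or serious_incident)
```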
AI literacy under Article 4, enforceable since February 2025, requires operators to understand how the system generates outputs, know its capabilities and limitations, interpret confidence indicators, recognise the difference between the system's recommendation and their own judgement, understand automation bias, and know override and escalation procedures. Training includes initial certification, annual refreshers, and periodic calibration exercises with known-incorrect recommendations.
For serious incident detection, Article 3(49) defines serious incidents as those resulting in death, serious health harm, irreversible critical infrastructure disruption, fundamental rights infringement, or serious property or environmental harm. Reporting timelines range from 2 days for widespread rights infringement to 15 days for other incidents. The deployer notifies the provider first; if the provider is unreachable or unresponsive, the deployer reports directly to the market surveillance authority. Every event assessed against Article 3(49) is logged in the incident register regardless of outcome.
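Once an incident is classified, the reporting deadline follows mechanically. A minimal sketch, assuming only the two timelines stated above (2 days for widespread rights infringement, 15 days for other serious incidents):

```python
from datetime import date, timedelta

def reporting_deadline(detected: date, widespread_infringement: bool) -> date:
    """Compute the reporting deadline from the detection date.

    Uses only the two timelines stated in this handbook; the initial
    notification to the provider is still owed without undue delay.
    """
    days = 2 if widespread_infringement else 15
    return detected + timedelta(days=days)
```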
The penalty exposure for each Article 26 obligation reaches EUR 15 million or 3 per cent of global annual turnover. This applies to all eight deployer obligations equally. The most likely enforcement triggers in practice are failure to conduct the FRIA before putting the system into service, failure to assign competent human oversight personnel, failure to report serious incidents to the provider or authority, and use outside the documented intended purpose.
Regulatory inspection readiness requires having immediately accessible: the deployer compliance record with all eight modules, the FRIA and stakeholder consultation records, human oversight personnel records and training certificates, monitoring logs and incident register, deployer-provider communication records, log retention evidence under Article 26(6), worker notification records for employment context, and the EU database registration for public authority deployers. The response protocol defines who receives the authority request, who coordinates the response, who reviews documents before production, the maximum response time, and how privileged material is handled.
Oversight fatigue and long-term vigilance require attention. Normalisation of deviance occurs when small compliance shortcuts become accepted practice: operators stop reviewing calibration cases, monitoring reports go unread, break-glass tests are deferred. The AI Governance Lead should conduct an annual deviation audit examining whether documented procedures are still followed. Budget and staffing continuity are critical: AI compliance oversight is a sustained operational cost, not a one-time implementation. Organisations that cut oversight budgets after the initial compliance push expose themselves to enforcement risk.
When a provider withdraws or discontinues a system, the deployer must manage the transition while maintaining compliance throughout. Transition planning requires assessing alternative systems, planning migration timelines, and ensuring continuity of service to affected persons during transition. The provider's end-of-life policy, requested at procurement, determines the notice period and transition support available.
Downstream decision review is required where the decommissioned system's outputs continue to affect individuals. Credit assessments, recruitment decisions, and welfare eligibility determinations may remain in effect long after the system is withdrawn. The deployer assesses which historical decisions remain consequential and whether those decisions should be reviewed in light of the system's withdrawal, particularly if the withdrawal was triggered by a compliance concern.
Data lifecycle closure requires reconciling retention obligations with deletion requirements. Automatically generated logs under Article 26(6) are retained for the minimum six-month period or longer if required by the context. Personal data processed by the system is deleted or anonymised in accordance with the GDPR retention policy. The FRIA, deployer compliance record, and incident register are retained as evidence.
Compliance record finalisation involves closing Module D8 with the final compliance review, recording the system's decommission date and reason, documenting the transition plan execution, and archiving the complete deployer compliance record. Deployer-provider communication records are preserved to document the notification and support received during the transition.
No. A deployer that does not trigger Article 25 provider status prepares a deployer compliance record with eight modules, not a full AISDP. The provider prepares the AISDP; the deployer's documentation covers its own obligations under Article 26.
For the Declaration of Conformity, CE marking, or Instructions for Use, do not put the system into service. For other items, document the gap, assess compliance risk, and implement compensating measures. All gaps are recorded in Module D8 of the deployer compliance record.
Any person at Level 2 (operators) or above in the oversight pyramid. Requiring senior management approval before stopping a harmful system introduces dangerous delay. The governance policy must protect anyone who triggers break-glass in good faith.
A five-level pyramid: technical monitoring (IT ops), AI system operators (human oversight of outputs), operational management (business metrics), compliance/legal/DPO (regulatory posture), and executive leadership (strategic oversight).
EUR 15 million or 3 per cent of global annual turnover for each of the eight Article 26 obligations. Most likely triggers: failure to conduct FRIA, inadequate human oversight, failure to report incidents, and use outside documented purpose.
The Legal and Regulatory Advisor reassesses the organisation's status against all three criteria when the system is first put into service, annually, whenever the deployment context changes, whenever the provider updates the system, and whenever the organisation modifies its own use. Each assessment is documented with reasoning and conclusion.
Where the provider has not supplied disaggregated fairness data, the deployer can generate it from operational data. The deployer's own records of outputs, combined with demographic information obtained in GDPR compliance, provide the basis for a deployer-side fairness analysis. The FRIA and DPIA should be conducted in parallel and cross-referenced, maintained as separate documents addressing different dimensions.
Break-glass procedures enable any person at Level 2 or above to stop the system immediately when they believe harm is occurring, without requiring senior approval. The governance policy protects anyone who triggers break-glass in good faith. Procedures are tested annually through simulated exercises. An escalation-without-reprisal culture extends whistleblower protection to AI compliance concerns, with confidential and anonymous reporting channels.
For organisations managing multiple AI systems, a portfolio register tracks each system, its provider, deployment date, risk tier, FRIA status, human oversight assignments, next review date, and current compliance status. Cross-system risk analysis identifies patterns and shared vulnerabilities across the portfolio. Portfolio reporting aggregates compliance status for executive leadership.
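A portfolio register of this kind is, at bottom, a small table. An illustrative sketch of one row and an overdue-review query, with field names of our own choosing:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PortfolioEntry:
    """One row of the portfolio register; fields mirror the list above."""
    system: str
    provider: str
    deployed: date
    risk_tier: str
    fria_complete: bool
    oversight_owner: str
    next_review: date
    status: str

def overdue_reviews(register: list, today: date) -> list:
    """Return the names of systems whose next review date has passed."""
    return [entry.system for entry in register if entry.next_review < today]
```

Cross-system risk analysis and executive portfolio reporting would then aggregate over such rows rather than over individual compliance records.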