The deployer compliance record contains eight modules with defined review frequencies. A five-level oversight pyramid from technical monitoring through executive leadership provides the human and organisational framework for operational compliance. Automation bias, intent drift, and regulatory change are the key ongoing risks deployers must manage.
A deployer that does not trigger Article 25 provider status prepares a deployer compliance record documenting its own obligations in an eight-module structure. Each module maps a specific deployer responsibility to a discrete documentation area.
Module D1 covers system identification and provider reference: the system's name, version, and provider identity; the EU database registration reference; confirmation that the Declaration of Conformity has been received and reviewed; the date the system was put into service; and the authorised representative's contact details where the provider is established outside the EU.
Module D2 documents the intended purpose and deployment context: the deployer's specific use described in sufficient detail to confirm it falls within the provider's documented intended purpose; the operational domain; the categories of decisions supported; the geographic scope; the categories of affected persons; and any configurations or customisations with assessment of whether each remains within the provider's operating parameters.
Module D3 contains the full Fundamental Rights Impact Assessment under Article 27, following the FRIA methodology. This includes stakeholder consultation records, intersectional analysis, the record of market surveillance authority notification and publication where the deployer is a public authority, and supplementary controls where the provider's mitigations are insufficient for the deployer's context.
Module D4 documents human oversight arrangements: the deployer's oversight structure mapped to the Instructions for Use, identity and qualifications of oversight personnel, training and certification records, override and escalation procedures, break-glass procedures, and records of break-glass tests.
Module D5 covers monitoring and incident response in comprehensive detail. It documents monitoring activities aligned to the IFU, including the specific metrics tracked, their thresholds, the review frequency for each metric, and the persons responsible for each review activity. It records the log retention arrangements under Article 26(6): how automatically generated logs are stored with a minimum retention period of six months, the storage and access controls applied, and the retention schedule and deletion procedures. It also contains the internal incident register tracking every event assessed against the serious incident criteria; the triage process for classifying events by severity; the escalation path to the provider for incident notification; procedures for reporting serious incidents under Article 73, including the tiered timelines; and a documented record of all deployer-provider communications.
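The triage and tiered reporting logic in Module D5 can be made concrete in a short sketch. The Python below is illustrative only: the class names are hypothetical, and the day counts follow the commonly cited Article 73 tiers (15 days by default, 10 days where a death has occurred, 2 days for a widespread infringement), which should be verified against the current text of Article 73 before use.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class IncidentClass(Enum):
    """Severity classes for triage; names are illustrative, not statutory."""
    DEFAULT_SERIOUS = "serious incident"
    DEATH = "death of a person"
    WIDESPREAD = "widespread infringement"

# Assumed reporting deadlines in days after awareness (verify against Article 73).
REPORTING_DEADLINES = {
    IncidentClass.DEFAULT_SERIOUS: 15,
    IncidentClass.DEATH: 10,
    IncidentClass.WIDESPREAD: 2,
}

@dataclass
class IncidentRecord:
    """One entry in the internal incident register (Module D5)."""
    incident_id: str
    observed_on: date
    description: str
    classification: IncidentClass

    def report_due_by(self) -> date:
        """Latest notification date under the assumed tiered timelines."""
        return self.observed_on + timedelta(days=REPORTING_DEADLINES[self.classification])

# Example: a triaged incident and its reporting deadline.
incident = IncidentRecord("INC-001", date(2025, 3, 3),
                          "Screening tool error spike", IncidentClass.DEFAULT_SERIOUS)
print(incident.report_due_by())  # 2025-03-18 under the assumed 15-day tier
```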
Module D6 addresses data protection comprehensively: the Data Protection Impact Assessment where required under GDPR Article 35, the lawful basis for processing personal data through the AI system, the data processing agreement with the provider governing how data flows between the deployer and provider, and data subject rights procedures specific to AI-generated decisions. These procedures must address both the AI Act's Article 86 right to explanation, which requires transparency about the role of the AI system in decisions affecting individuals, and GDPR Article 22 rights regarding automated individual decision-making.
Module D7 applies to public authority deployers and Union institutions only, covering the Article 49(3) registration record and the FRIA summary and DPIA summary submitted to the EU database. This registration must be completed before the system is put into service.

Module D8 documents compliance review and maintenance: a quarterly review schedule ensuring the compliance record remains current, a process for assessing whether context changes affect the risk profile documented in the FRIA, the Article 25 reassessment process that evaluates whether the deployer's relationship with the system has changed in ways that might trigger provider status, and a version history with change logs recording every material modification to the compliance record.
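Taken together, the eight modules and their review cadences can be represented as data, which makes the maintenance cycle in Module D8 easier to drive mechanically. A minimal Python sketch with hypothetical names; the cadences follow the review frequencies discussed later in this section, and "event-driven" is an assumption for modules with no stated cycle.

```python
# Hypothetical registry of the eight deployer compliance record modules.
# Cadences follow the review frequencies described in this section;
# "event-driven" is an assumption for modules with no stated cadence.
MODULES = {
    "D1": ("System identification and provider reference", "event-driven"),
    "D2": ("Intended purpose and deployment context", "quarterly"),
    "D3": ("Fundamental Rights Impact Assessment (Article 27)", "annual"),
    "D4": ("Human oversight arrangements", "quarterly"),
    "D5": ("Monitoring and incident response", "monthly"),
    "D6": ("Data protection", "annual"),
    "D7": ("Public authority registration (Article 49(3))", "event-driven"),
    "D8": ("Compliance review and maintenance", "quarterly"),
}

for module_id, (title, cadence) in MODULES.items():
    print(f"{module_id}: {title} (review: {cadence})")
```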
Technical controls are effective only if the people operating the system use them correctly. An AI system that passes its conformity assessment may still cause harm in production. Data distributions shift. User behaviour evolves. Deployer configurations drift. Operators develop automation bias. Business pressures incentivise ignoring warning signals. Operational oversight is the deployer's defence against these realities, providing the human and organisational dimension of deployer compliance that technical monitoring alone cannot address.
The deployer oversight pyramid organises this defence across five levels, each with distinct responsibilities, capabilities, and escalation authorities. Each level is staffed by a different function within the deploying organisation, so that technical, operational, compliance, and strategic perspectives are all represented. The design ensures that every dimension of the system's behaviour is observed by someone with the capability and authority to act on what they observe. Each level escalates to the next when an issue exceeds its authority or expertise, with defined escalation triggers routing problems correctly and without delay.
The five levels progress from automated technical monitoring at Level 1, through human oversight of individual outputs at Level 2, business-level alignment monitoring at Level 3, and compliance and regulatory assessment at Level 4, to strategic decision-making at Level 5. This layered approach ensures that a failure at any single level does not leave the system unmonitored: even if an operator fails to catch a problematic output at Level 2, the pattern may still be detected through complaint analysis at Level 3 or through a post-market monitoring (PMM) metric breach at Level 4.
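As a sketch of how the pyramid's routing might be encoded, the Python below maps an issue to the level that first handles it and escalates one level when a trigger exceeds that level's authority. The level names and trigger keys are hypothetical, not taken from the Act.

```python
from enum import IntEnum

class Level(IntEnum):
    """The five oversight levels; authority increases with the number."""
    TECHNICAL = 1      # IT operations / platform engineering
    OPERATOR = 2       # human operators reviewing outputs
    OPERATIONAL = 3    # line managers, operational directors
    COMPLIANCE = 4     # governance lead, legal, DPO
    EXECUTIVE = 5      # executive leadership

# Hypothetical mapping from issue type to the level that first handles it.
INITIAL_HANDLER = {
    "error_rate_spike": Level.TECHNICAL,
    "inconsistent_output": Level.OPERATOR,
    "complaint_trend": Level.OPERATIONAL,
    "regulatory_breach_suspected": Level.COMPLIANCE,
    "serious_incident": Level.EXECUTIVE,
}

def escalate(current: Level) -> Level:
    """Route an unresolved issue one level up, capped at executive leadership."""
    return Level(min(current + 1, Level.EXECUTIVE))

issue = INITIAL_HANDLER["complaint_trend"]
print(escalate(issue))  # Level.COMPLIANCE: unresolved trends go to Level 4
```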
Level 1 technical monitoring is staffed by IT operations, platform engineers, or the team managing the system's infrastructure. Their function is continuous monitoring of system availability, response times, error rates, and log integrity. They require real-time visibility into operational metrics through dashboards, automated alerting for threshold breaches that triggers notification without requiring constant human observation, access to the logging infrastructure for investigation when alerts fire, and authority to escalate infrastructure issues to the provider without requiring prior internal approval. Escalation triggers include system unavailability or degraded response times, error rate spikes indicating potential infrastructure or model problems, log generation failures that create blind spots in the compliance record, and unexpected behaviour changes following a provider-initiated system update.
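A minimal version of Level 1's automated threshold alerting might look like the sketch below. The metric names and threshold values are placeholders; in practice they would come from the IFU and the deployer's own operational baselines.

```python
# Hypothetical Level 1 threshold check: compare live metrics against
# alerting thresholds and emit an escalation message for each breach.
THRESHOLDS = {
    "error_rate": 0.05,        # max acceptable fraction of failed requests
    "p95_latency_ms": 1500,    # max acceptable 95th-percentile response time
    "missing_log_ratio": 0.0,  # any gap in log generation is a blind spot
}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return alert messages for every metric that breaches its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT {name}={value} exceeds threshold {limit}")
    return alerts

# Example: an error-rate spike and a log-generation failure both fire alerts.
print(check_metrics({"error_rate": 0.11, "p95_latency_ms": 900,
                     "missing_log_ratio": 0.02}))
```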
Level 2 AI system operators are the human operators who interact with the system's outputs in daily operation: recruiters using a screening tool, credit analysts reviewing model recommendations, clinicians reviewing diagnostic suggestions, case workers assessing eligibility determinations, and claims assessors reviewing fraud detection outputs. Their function is real-time human oversight of the system's outputs, exercising override, intervention, and escalation capabilities.
Operators require training and certification on the system's capabilities, limitations, and failure modes before they begin operating the system. They must understand the meaning of confidence indicators and explanation outputs. They must know when and how to override the system's recommendations, with the override capability always available and low-friction. They must have a clear escalation pathway for reporting concerns. They must be able to recognise patterns suggesting the system is behaving differently from its documented purpose.
The deployer compliance record is a living document, not a one-time compliance exercise. The quarterly review schedule in Module D8 ensures that the record remains current as the deployment context evolves, the provider updates the system, the regulatory landscape changes, and operational experience accumulates.
At each quarterly review, the deployer assesses whether any context changes affect the risk profile documented in the FRIA. A change in the population served, a modification to the deployer's operational procedures, an expansion to new geographic regions, or a shift in the decision context may all require FRIA review. The Article 25 reassessment evaluates whether the deployer's relationship with the system has changed in ways that might trigger provider status, particularly if the deployer has made configuration changes, extended the use beyond the documented intended purpose, or modified the system's integration in ways that could constitute a substantial modification.
The Article 25 reassessment is particularly important at each quarterly review. The deployer evaluates whether any of the three provider-escalation triggers has been activated: whether the system has been rebranded under the deployer's own commercial identity, whether a substantial modification has been made to the system, and whether the system is being used for a purpose outside the provider's documented intended purpose. Each assessment is documented with its reasoning and conclusion, creating an audit trail that demonstrates ongoing awareness of the provider-deployer boundary.
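The quarterly check reduces to three yes/no questions plus a documented conclusion. A minimal sketch, assuming hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article25Assessment:
    """Quarterly record of the three provider-escalation triggers (Article 25)."""
    assessed_on: date
    rebranded_under_own_identity: bool
    substantial_modification_made: bool
    use_outside_intended_purpose: bool
    reasoning: str  # documented rationale, kept as the audit trail

    def provider_status_triggered(self) -> bool:
        """Any single trigger may shift the deployer into provider obligations."""
        return (self.rebranded_under_own_identity
                or self.substantial_modification_made
                or self.use_outside_intended_purpose)

q1 = Article25Assessment(date(2025, 4, 1), False, False, False,
                         "No configuration, branding, or purpose changes this quarter.")
print(q1.provider_status_triggered())  # False: boundary intact, conclusion recorded
```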
The version history in Module D8 records every material change to the compliance record with the date, the nature of the change, and the trigger that prompted it. This version trail demonstrates to a competent authority that the deployer maintains active compliance awareness rather than treating the compliance record as a static document filed at deployment and never revisited. A deployer who can produce a well-maintained, continuously updated compliance record with a clear version history is in a materially stronger position during a regulatory inspection than one who can only produce the initial document from the deployment date.
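The version trail itself needs only the three fields named above per entry: date, nature, and trigger. A minimal append-only sketch with hypothetical names:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeLogEntry:
    """One material change to the compliance record: date, nature, trigger."""
    changed_on: date
    nature: str   # what was modified
    trigger: str  # what prompted the modification

@dataclass
class VersionHistory:
    """Append-only version trail for Module D8."""
    entries: list[ChangeLogEntry] = field(default_factory=list)

    def record(self, changed_on: date, nature: str, trigger: str) -> None:
        self.entries.append(ChangeLogEntry(changed_on, nature, trigger))

history = VersionHistory()
history.record(date(2025, 4, 1), "Updated D2 geographic scope",
               "Expansion of service to two new member states")
print(len(history.entries))  # 1
```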
Data distributions shift, user behaviour evolves, configurations drift, and operators develop automation bias after deployment; ongoing operational oversight is the deployer's defence against these realities.

Override rates are a key signal at the operator level. A rate below 2% suggests automation bias, where operators accept outputs without scrutiny; a rate above 30% suggests poor system calibration, where the system's recommendations are frequently wrong.

Review frequencies vary by module: monthly for monitoring (D5); quarterly for deployment context (D2), human oversight (D4), and compliance review (D8); and annually for FRIA (D3) and data protection (D6).

Intent drift is the gradual divergence of actual system use from the documented intended purpose, through operator workarounds, new case types, and business unit extensions adopted without governance review.
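The override-rate bands above can be checked mechanically. A sketch, assuming the 2% and 30% boundaries given above and hypothetical function names:

```python
def classify_override_rate(overrides: int, decisions: int) -> str:
    """Classify an operator's override rate against the 2% / 30% bands above."""
    if decisions == 0:
        return "no data"
    rate = overrides / decisions
    if rate < 0.02:
        return "possible automation bias: outputs accepted without scrutiny"
    if rate > 0.30:
        return "possible poor calibration: recommendations frequently wrong"
    return "within expected range"

# Example: 3 overrides across 400 decisions is 0.75%, flagging automation bias.
print(classify_override_rate(3, 400))
```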
The automation bias risk is the most significant Level 2 concern. Operators develop a tendency to accept outputs without adequate scrutiny, particularly as they become accustomed to the system over time. Four countermeasures address this risk: delaying display of the system's output so that operators form an independent assessment before viewing the recommendation; inserting calibration cases where the recommendation is known to be incorrect, to test whether operators exercise genuine independent review; monitoring review dwell time to identify operators consistently reviewing cases in under five seconds for decisions requiring substantive analysis; and structuring performance incentives to reward thorough review rather than throughput.
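The dwell-time countermeasure can be monitored directly from review timestamps. A sketch, assuming the five-second floor mentioned above and hypothetical data shapes:

```python
from statistics import mean

# Hypothetical review dwell times in seconds, keyed by operator.
DWELL_TIMES = {
    "operator_a": [42.0, 37.5, 51.2, 44.8],
    "operator_b": [3.1, 2.8, 4.0, 3.5],   # consistently under five seconds
}

def flag_rushed_reviewers(dwell_times: dict[str, list[float]],
                          floor_seconds: float = 5.0) -> list[str]:
    """Flag operators whose average dwell time falls below the floor."""
    return [op for op, times in dwell_times.items()
            if times and mean(times) < floor_seconds]

print(flag_rushed_reviewers(DWELL_TIMES))  # ['operator_b']
```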
Escalation triggers include outputs inconsistent with the operator's professional judgement, outputs appearing to disadvantage a particular group of affected persons, situations the system was not designed to handle such as novel input types or unusual circumstances, and any case where the operator believes the system may be causing harm.
Level 3 operational management is staffed by line managers, team leaders, and operational directors. Their function is oversight of alignment with operational objectives, complaint handling, and the affected person experience. They require access to business-level metrics, including complaint volumes, appeal rates, override rates per operator, and affected person feedback, with the ability to interpret these metrics in the context of the FRIA. The most important function at this level is intent drift detection: operators may develop workarounds that circumvent the documented process, new case types may emerge that the system was not designed to handle, and business units may extend the system's use to new contexts without governance review. Operational management must recognise these patterns and escalate them before they become compliance issues. Escalation triggers include trends in complaints suggesting systematic unfairness, override rates that are persistently very low (indicating automation bias) or very high (indicating poor calibration), and any use outside the documented intended purpose.
Level 4 compliance, legal, and data protection oversight is staffed by the AI Governance Lead, Legal and Regulatory Advisor, and Data Protection Officer. Their function is oversight of compliance posture, regulatory risk, and legal obligations. They require regular reporting from Levels 1 through 3 and the ability to interpret reports in the regulatory context, assessing whether observed issues constitute non-compliance. The Legal and Regulatory Advisor conducts regulatory horizon scanning, monitoring guidance from the AI Office, enforcement actions taken by national competent authorities, harmonised standards developments, and legislative amendments. Each development is assessed for its impact on the deployer's systems. Escalation triggers include any escalation from Levels 1 through 3 that may constitute a regulatory breach, monitoring data suggesting the system no longer operates within documented parameters, and external events affecting the compliance posture.
Level 5 executive leadership provides strategic oversight, resource allocation, risk appetite setting, and culture setting. Executive leadership receives periodic reporting: quarterly during normal operations, and immediately for serious incidents. Reporting covers the compliance status of all systems, open issues, serious incidents, FRIA status, and overall risk posture. Escalation triggers include serious incidents under Article 73, unresolved compliance issues, resource constraints affecting oversight capability, and decisions to suspend or cease use of the system.