After mitigations are applied, every risk retains a residual rating that must be formally assessed for acceptability. Article 9(4) requires risks to be reduced "as far as possible through adequate design and development," a standard that demands evidence of diligent pursuit of risk reduction rather than zero residual risk. This page covers how to operationalise that standard, who decides acceptability, and how residual risk is communicated and reassessed.
Article 9(4) requires that risks be eliminated or reduced "as far as possible through adequate design and development." This standard does not require zero residual risk. It requires evidence that the organisation has pursued risk reduction to the point where further reduction would be disproportionate, technically infeasible, or counterproductive. Operationalising this standard requires a structured process.
For each risk above the acceptance threshold, the assessor documents the mitigations already implemented, the residual risk rating after those mitigations, the alternative mitigations that were considered and rejected, and the rationale for rejection. That rationale must address cost, technical feasibility, and any adverse effects the alternative mitigation would introduce.
The rationale for rejection must be specific. "Too expensive" is insufficient; the organisation must explain the cost relative to the system's economic value and the severity of the risk. Any claim of "not technically feasible" requires the Technical SME to provide supporting evidence that the alternative was investigated, not merely dismissed. "Would degrade performance" must quantify the degradation and explain why the current performance level is necessary for the system's intended purpose. For the broader risk management framework, see AI Risk Assessment and Management.
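The specificity requirements above can be sketched as a simple completeness check over a rejection record. This is an illustrative sketch only, not a prescribed tool; the class and field names (such as `cost_rationale`) are assumptions, not part of any mandated schema.

```python
from dataclasses import dataclass

@dataclass
class RejectedAlternative:
    """One mitigation that was considered and rejected (illustrative fields)."""
    name: str
    cost_rationale: str        # cost relative to system value and risk severity
    feasibility_evidence: str  # Technical SME evidence if infeasibility is claimed
    adverse_effects: str       # quantified degradation the alternative would cause

# Bare phrases that the text identifies as insufficient on their own.
GENERIC_PHRASES = {"too expensive", "not technically feasible",
                   "would degrade performance"}

def rationale_is_substantive(alt: RejectedAlternative) -> bool:
    """Reject bare generic phrases and require every dimension to be addressed."""
    fields = (alt.cost_rationale, alt.feasibility_evidence, alt.adverse_effects)
    if any(f.strip().lower() in GENERIC_PHRASES for f in fields):
        return False
    return all(f.strip() for f in fields)
```

A check like this can gate risk register entries before they reach governance review, so the AI Governance Lead never sees a rejection rationale that is empty or purely generic.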
Residual risk acceptance is a governance decision, not a technical one. The AI Governance Lead reviews the documentation for each risk above the threshold and determines whether the residual risk is acceptable. The determination is recorded as a formal risk acceptance decision, signed by the AI Governance Lead, and retained in the AISDP evidence pack.
Each acceptance confirms that the residual risk is proportionate to the system's benefits and that no reasonably practicable further mitigation is available. These sign-offs carry significant weight during conformity assessment. A notified body will examine whether the sign-off authority had sufficient information and whether the rationale demonstrates genuine engagement with alternatives rather than a rubber-stamp approval.
The separation between technical assessment and governance approval is deliberate. Engineers assess what mitigations are possible and their effects. The AI Governance Lead decides whether the remaining risk, given all available evidence, is acceptable for the organisation and its affected persons.
The AI Governance Lead communicates the residual risk assessment to deployers through the Instructions for Use (Module 8) and subjects it to continuous monitoring through the post-market monitoring plan (Module 12). Deployers must understand which residual risks they are inheriting and what compensating controls they must implement.
The communication must be specific. Stating that "the system has residual fairness risk" is inadequate. The deployer must know which subgroups are affected, the magnitude of the risk, the conditions under which the risk is most likely to materialise, and the compensating controls that the deployer should apply. Vague residual risk disclosures transfer risk to the deployer without giving them the information needed to manage it.
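The four specifics named above lend themselves to a mechanical completeness check on the Module 8 disclosure. A minimal sketch follows; the dictionary keys are assumptions chosen to mirror the text, not a mandated disclosure format.

```python
# Required specifics for a residual risk disclosure to deployers,
# mirroring the four items named in the text (illustrative keys).
REQUIRED_DISCLOSURE_FIELDS = {
    "affected_subgroups",          # which subgroups are affected
    "risk_magnitude",              # the magnitude of the risk
    "materialisation_conditions",  # when the risk is most likely to occur
    "compensating_controls",       # what the deployer should apply
}

def missing_disclosure_fields(disclosure: dict) -> list[str]:
    """Return the required fields that are absent or empty, sorted by name."""
    return sorted(
        f for f in REQUIRED_DISCLOSURE_FIELDS
        if not str(disclosure.get(f, "")).strip()
    )
```

A disclosure that returns a non-empty list here is exactly the kind of vague statement ("the system has residual fairness risk") that the text flags as inadequate.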
This specificity also protects the provider. A deployer who experiences a foreseeable harm and can demonstrate that the provider's residual risk communication was inadequate may have grounds for claiming the provider failed to fulfil its Article 13 transparency obligations.
Residual risk acceptability changes over time. A risk that was acceptable at deployment may become unacceptable under several circumstances. The deployment scale may increase, exposing more affected persons. The affected population may change, for instance when the system is deployed in a new jurisdiction with different demographic composition.
New evidence may emerge, such as academic research identifying a failure mode that was not anticipated. The regulatory standard may tighten, such as new guidance from the AI Office raising the bar for a specific risk category. The quarterly risk register review must reassess residual risk acceptability, not merely confirm that the original assessment remains on file.
Each reassessment applies the same rigour as the initial acceptance decision: updated evidence, consideration of new alternatives, and AI Governance Lead sign-off. For the full iterative review cadence, see Iterative Risk Management.
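The four trigger conditions can be expressed as a comparison between the context at the last acceptance decision and the current context. The sketch below is illustrative; the `DeploymentContext` fields are assumptions standing in for whatever the organisation actually tracks.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentContext:
    """Snapshot of conditions relevant to acceptability (illustrative fields)."""
    deployment_scale: int                 # number of affected persons exposed
    jurisdictions: frozenset[str]         # where the system is deployed
    known_failure_modes: frozenset[str]   # failure modes known at the time
    guidance_version: str                 # applicable regulatory guidance

def reassessment_triggers(baseline: DeploymentContext,
                          current: DeploymentContext) -> list[str]:
    """List which of the four trigger conditions apply since the baseline."""
    triggers = []
    if current.deployment_scale > baseline.deployment_scale:
        triggers.append("deployment scale increased")
    if current.jurisdictions != baseline.jurisdictions:
        triggers.append("affected population changed")
    if current.known_failure_modes - baseline.known_failure_modes:
        triggers.append("new evidence emerged")
    if current.guidance_version != baseline.guidance_version:
        triggers.append("regulatory standard changed")
    return triggers
```

Any non-empty result would mark the risk for a full reassessment at the next quarterly review rather than a confirmation that the original file is still present.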
Different AI system types present different residual risk profiles, and organisations should calibrate their acceptability thresholds accordingly.
For classification and ranking systems such as recruitment screening or credit scoring, the primary residual risks relate to discriminatory outcomes, proxy variable effects, and feedback loops that amplify historical bias. The residual risk assessment must include disaggregated fairness analysis across protected characteristics and an intersectional vulnerability assessment. For the methodology, see Fundamental Rights Impact Assessment.
For generative AI systems, the residual risk profile shifts towards content safety, hallucination, intellectual property infringement, and the potential for misuse in creating synthetic media. The risk register must address both intended and adversarial use cases.

For systems embedded in physical products (safety components within machinery, vehicles, or medical devices), the AI-specific residual risk assessment supplements the product-level FMEA under Annex I legislation; it does not replace it.

For real-time decision systems (autonomous vehicles, medical diagnostic tools, critical infrastructure controls), latency and reliability residual risks take on heightened significance, requiring worst-case failure scenario modelling, including cascading failures and failsafe activation within documented response times.
The AISDP evidence pack for each residual risk acceptance must contain four elements. First, the mitigations implemented and their measured effect on the risk rating. Second, the alternative mitigations considered, with specific rationale for rejection addressing cost, feasibility, and adverse effects. Third, the AI Governance Lead's signed acceptance decision with rationale. Fourth, the deployer communication in Module 8 specifying which subgroups are affected, the risk magnitude, materialisation conditions, and recommended compensating controls.
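The four-element structure above can be captured as a completeness check on the evidence pack. This is a sketch under assumed key names; the AISDP itself does not prescribe a machine-readable format.

```python
# The four evidence-pack elements named in the text (illustrative keys).
EVIDENCE_PACK_ELEMENTS = (
    "mitigations_and_measured_effect",   # 1: mitigations and their effect on the rating
    "rejected_alternatives_rationale",   # 2: alternatives with rejection rationale
    "signed_acceptance_decision",        # 3: AI Governance Lead sign-off with rationale
    "module8_deployer_communication",    # 4: deployer-facing disclosure specifics
)

def evidence_pack_complete(pack: dict) -> bool:
    """True only when all four elements are present and non-empty."""
    return all(str(pack.get(e, "")).strip() for e in EVIDENCE_PACK_ELEMENTS)
```

A check like this catches missing elements before submission, though a notified body will of course judge the substance of each element, not merely its presence.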
This documentation is examined closely during conformity assessment. A notified body will check whether alternatives were genuinely investigated (not merely listed), whether rejection rationales are substantive (not generic), and whether the governance sign-off demonstrates informed judgement. The evidence must tell a coherent story: the organisation identified the risk, pursued reduction diligently, reached a point of diminishing returns, and accepted the remaining risk with full awareness of its implications. For how these residual risk decisions feed into the scoring framework, see Risk Scoring and Calibration.
Is "too expensive" a sufficient rationale for rejecting an alternative mitigation?
No. The organisation must explain the cost relative to the system's economic value and the severity of the risk. Claims of technical infeasibility require the Technical SME to provide evidence that alternatives were investigated. Claims of performance degradation must quantify the impact.
What happens if a residual risk becomes unacceptable after deployment?
The quarterly risk register review must reassess acceptability in light of changed circumstances. A risk may become unacceptable due to increased deployment scale, changed affected populations, new evidence, or tightened regulatory standards. New mitigations must then be identified and implemented.
What do deployers need to know about residual risks?
Deployers inherit certain residual risks and must implement compensating controls. The provider's Instructions for Use must give deployers enough specific information to manage those risks. Inadequate disclosure may constitute a failure to meet Article 13 transparency obligations.
When must residual risk acceptability be reassessed?
At every quarterly risk register review, and whenever deployment scale increases, affected populations change, new evidence emerges, or regulatory standards tighten.
How do residual risk profiles differ across AI system types?
Classification systems face fairness and bias residual risks, generative AI faces content safety and hallucination risks, embedded systems integrate with product-level FMEA, and real-time systems face heightened latency and reliability risks.