Article 9 of the EU AI Act requires continuous iteration of the risk management system. This cluster explains how organisations govern risk register reviews, define trigger events for immediate reassessment, select implementation platforms, and translate findings into AISDP Module 6.
Article 9 requires continuous iteration of the risk management system throughout the AI lifecycle.
In practice, the AI System Assessor reviews the risk register at defined intervals, after every substantial modification, whenever post-market monitoring data reveals new risk indicators, and whenever external factors change. Quarterly review is common for high-risk systems, but the regulation demands responsiveness beyond scheduled cycles. New guidance from the AI Office, relevant enforcement actions, and changes in the deployment context all require the risk management system to adapt.
A risk management system that produces a thorough initial assessment and then sits dormant until the next annual review will fail regulatory scrutiny. The word "continuous" in Article 9 is the part most organisations underestimate. The risk register is also the most heavily scrutinised artefact in the AISDP evidence pack. It must demonstrate two things simultaneously: that the organisation identified the right risks, and that the register evolved over time in response to new information. A static register, one that looks the same at the end of the first year as it did at the beginning, suggests the organisation is not learning from its own monitoring data.
The quarterly risk register review is a governance event, not merely a technical review.
The AI Governance Lead convenes the review and ensures attendance from the Technical SME, Legal and Regulatory Advisor, and any other stakeholder with relevant information. Each review follows a structured agenda covering new risks identified since the last review, drawn from post-market monitoring data, operator escalations, deployer feedback, horizon scanning, or internal testing.
The agenda also addresses changes to existing risk ratings supported by evidence, the status of mitigation actions (on track, delayed, or blocked), and reassessment of residual risk acceptability in light of any changes. External developments that affect the risk profile are reviewed as a standing item. This governance structure ensures that risk management remains a cross-functional discipline rather than a purely technical exercise.
Certain events should trigger an immediate risk reassessment outside the quarterly review cycle.
These triggers include a serious incident, a substantial modification determination, and a competent authority inquiry or enforcement action against the organisation or a comparable system. Publication of new harmonised standards or AI Office guidance that affects the system's risk domain also constitutes a trigger event.
A significant change in the deployment context, such as new deployer populations, new jurisdictions, or material changes in the affected person population, also requires immediate reassessment. The AI Governance Lead documents the trigger mechanism in the quality management system and communicates it to every person who might observe a trigger event, whether a serious incident, a regulatory development, or a deployer complaint, so that each knows how to activate it.
The AI System Assessor documents each review with the review date, attendees, findings, changes to risk ratings, and any new mitigations adopted. The iterative nature of the risk management system should be demonstrable through the version history of the risk register, showing how it has evolved in response to new information.
The NIST AI RMF provides the strongest methodological foundation for continuous risk management.
Its four functions (Govern, Map, Measure, and Manage) align naturally with the Article 9 lifecycle. In practice, the approach is to embed each function into an existing organisational process: governance into the AI governance committee's standing agenda, mapping into every design review and change request, measurement into the CI/CD pipeline's quality gates, and management into the quarterly review cycle.
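Embedding the Measure function in CI/CD can be as simple as a gate check that fails the build when a measured risk indicator exceeds its acceptance threshold. A minimal sketch; the metric names and limits here are assumptions standing in for whatever acceptance criteria the risk register actually defines:

```python
# Hypothetical gate metrics and limits; real values come from the risk
# register's acceptance criteria for each measured risk.
QUALITY_GATES = {
    "demographic_parity_gap": 0.05,    # maximum allowed
    "adversarial_failure_rate": 0.02,  # maximum allowed
}

def failed_gates(metrics: dict[str, float]) -> list[str]:
    """Gate names the current build violates; a missing metric fails its gate."""
    return [name for name, limit in QUALITY_GATES.items()
            if metrics.get(name, float("inf")) > limit]
```

A CI step would call `failed_gates` on the evaluation job's output and fail the pipeline when the returned list is non-empty, which makes the measurement evidence a by-product of normal development rather than a separate exercise.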
ISO/IEC 23894:2023 provides a complementary perspective. Where NIST is broad and principles-based, ISO 23894 is more prescriptive about integration points with the development lifecycle. It defines specific review triggers including change in intended purpose, change in operational domain, new data source, and new model version. These triggers align well with the AISDP substantial modification thresholds. For organisations already ISO-aligned, adopting 23894 as the risk management system framework reduces the translation effort.
Bow-tie analysis is valuable specifically for the conformity assessment audience. A bow-tie diagram for a given risk shows the threat event at the centre, the causes and preventive controls on the left, and the consequences and mitigative controls on the right. This gives an assessor a single-page view of the complete control landscape for that risk. For a high-risk AI system, the top ten risks should each have a bow-tie diagram in the AISDP evidence pack. The diagrams also reveal gaps where a cause has no preventive control or a consequence has no mitigative control.
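The gap-revealing property of a bow-tie can also be checked mechanically. A minimal sketch of the data structure, assuming a simple cause-to-controls and consequence-to-controls mapping; the example content is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class BowTie:
    """One risk's bow-tie: the threat event at the centre, causes and
    their preventive controls on the left, consequences and their
    mitigative controls on the right."""
    threat_event: str
    preventive: dict[str, list[str]] = field(default_factory=dict)  # cause -> controls
    mitigative: dict[str, list[str]] = field(default_factory=dict)  # consequence -> controls

    def gaps(self) -> list[str]:
        """Flag causes with no preventive control and consequences with
        no mitigative control, the gaps the diagram is meant to reveal."""
        missing = [f"uncontrolled cause: {c}"
                   for c, controls in self.preventive.items() if not controls]
        missing += [f"unmitigated consequence: {c}"
                    for c, controls in self.mitigative.items() if not controls]
        return missing
```

Running such a check over the top-ten bow-ties before each quarterly review gives the AI System Assessor a ready-made list of control gaps to put on the agenda.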
Versioning is the mechanism that demonstrates the register's evolution over time.
Every change, whether a new risk added, a rating changed, a mitigation updated, or a risk closed, must create a new version with a timestamp, the identity of the person who made the change, and a rationale. The version history becomes the narrative of the organisation's risk management journey.
An assessor reviewing the register should be able to trace from version to version and see the organisation responding to events. A post-market monitoring alert triggered a risk rating increase. A new mitigation reduced a residual risk score. A horizon scanning exercise added a risk that had not previously been considered. This traceability is essential for demonstrating continuous compliance during conformity assessment.
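The timestamp-author-rationale requirement maps naturally onto an append-only log. A minimal sketch, assuming an in-memory store; the class names are illustrative and a real implementation would persist to whichever platform hosts the register:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class RegisterChange:
    risk_id: str
    change: str      # e.g. "severity rating increased from 9 to 12"
    rationale: str   # e.g. "post-market monitoring alert"
    author: str
    timestamp: datetime

class RiskRegisterHistory:
    """Append-only history: every change creates a new version, none is rewritten."""

    def __init__(self) -> None:
        self._log: list[RegisterChange] = []

    def record(self, risk_id: str, change: str, rationale: str, author: str) -> int:
        self._log.append(RegisterChange(
            risk_id, change, rationale, author, datetime.now(timezone.utc)))
        return len(self._log)  # the new version number

    def trace(self, risk_id: str) -> list[RegisterChange]:
        """The narrative an assessor walks: each change to one risk, in order."""
        return [c for c in self._log if c.risk_id == risk_id]
```

Because entries are immutable and only ever appended, the `trace` of any risk reconstructs exactly the version-to-version story an assessor expects to follow.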
Three implementation patterns are common for maintaining iterative risk registers.
The first is a dedicated GRC platform such as OneTrust, ServiceNow, or Archer, providing structured risk entry forms, automated workflow for review and approval, scheduled review reminders, and audit trails. If the organisation already runs OneTrust for GDPR, extending it to AI governance avoids a parallel system.
The second pattern is an adapted project management tool such as Jira or Azure DevOps, where risk entries become custom issue types with fields for likelihood, impact dimensions, composite score, mitigation description, residual rating, and owner. This pattern keeps risk management visible alongside development work.
The third is a Git-based register, where each risk is a YAML or JSON file in a dedicated repository, with changes flowing through pull requests. This is the lightest-weight option with the strongest version history, though it lacks workflow automation. The AI Governance Lead handles review reminders and escalations externally through GitHub Actions or calendar integrations. AI-specific platforms such as Credo AI and Holistic AI offer tighter integration with ML workflows but require standing up a new platform.
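In the Git-based pattern, a pull-request check can enforce the register's schema before a change merges. A minimal sketch using the JSON variant the text mentions; the file path, field names, and example values are assumptions, not a mandated format:

```python
import json

# Illustrative content for one risk file, e.g. risks/R-012.json in the
# register repository.
RISK_FILE = """
{
  "id": "R-012",
  "title": "Training data drift degrades accuracy over time",
  "likelihood": 3,
  "impact": 4,
  "mitigations": ["scheduled retraining", "drift monitoring alert"],
  "residual_rating": 6,
  "owner": "AI System Assessor"
}
"""

REQUIRED_FIELDS = {"id", "title", "likelihood", "impact",
                   "mitigations", "residual_rating", "owner"}

def missing_fields(text: str) -> list[str]:
    """A pull-request check: list any required fields a risk file omits."""
    return sorted(REQUIRED_FIELDS - set(json.loads(text)))
```

Wired into a GitHub Actions job, this check makes the pull request itself the review-and-approval workflow: an incomplete risk entry simply cannot merge.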
A risk register does not require GRC software to satisfy Article 9.
A version-controlled spreadsheet stored in a shared drive with filename-based versioning satisfies the requirement. Each quarterly review produces a new version, and the previous version is retained unmodified. The review record covering the date, attendees, findings, and changes is documented in a separate meeting minutes file stored alongside the register.
To make this work, the organisation needs a filename convention enforcing both version and date, along with a shared storage location where write access is restricted to the AI Governance Lead and their delegate. A meeting minutes template should include mandatory fields: date, attendees, each risk reviewed, rating changes with rationale, new risks identified, and mitigations updated. All versions are retained by the Conformity Assessment Coordinator for ten years, with no deletion permitted without AI Governance Lead approval.
The trade-off is the loss of automated workflow features such as reminders, approval chains, and status tracking. The team must manage review scheduling and version discipline manually.
The AI System Assessor documents the risk assessment findings and decisions in AISDP Module 6, the Risk Management System module.
The translation is not a copy-paste exercise. Module 6 must present the risk management system in a form that a competent authority or notified body can evaluate against Article 9.
Module 6 must contain a description of the risk management methodology, covering which methods were used, how risks were scored, and who participated. It must contain the risk register itself, with each risk accompanied by its identification source, initial rating, mitigations, and residual rating. It must also contain the residual risk acceptance decisions, each with the AI Governance Lead's rationale and signature, along with the risk register version history demonstrating iterative review.
Cross-references to the AISDP modules where each risk's mitigations are documented are essential. A fairness risk mitigated through data governance cross-references Module 4. A cybersecurity risk mitigated through access controls cross-references Module 9. The evidence pack for Module 6 must include the FMEA worksheets, the stakeholder consultation records, the red-teaming reports, the horizon scanning summaries, the risk scoring calibration records, and the quarterly review meeting minutes. The AI System Assessor references each artefact by its evidence register identifier for rapid retrieval during an assessment.
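Rapid retrieval depends on every cited identifier actually resolving. A minimal sketch of a pre-assessment check, assuming a simple mapping of risks to their cited evidence identifiers; the `EV-` identifier scheme is an assumption:

```python
def unresolved_references(risk_evidence: dict[str, list[str]],
                          evidence_register: set[str]) -> dict[str, list[str]]:
    """For each risk, the cited evidence identifiers that do not
    resolve to an entry in the evidence register."""
    dangling = {risk: [ref for ref in refs if ref not in evidence_register]
                for risk, refs in risk_evidence.items()}
    return {risk: refs for risk, refs in dangling.items() if refs}
```

Running this before a conformity assessment turns a dangling cross-reference from an awkward discovery in front of an assessor into a routine fix during preparation.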
How often must the risk register be reviewed?
Quarterly review is common for high-risk systems, but additional reviews must be triggered by serious incidents, substantial modifications, regulatory changes, and significant changes in the deployment context.

Can a version-controlled spreadsheet satisfy Article 9?
Yes. A version-controlled spreadsheet with filename-based versioning satisfies the requirement, provided each quarterly review produces a new version, previous versions are retained unmodified, and review records are documented alongside the register.

What does bow-tie analysis contribute to the evidence pack?
Bow-tie analysis provides a single-page view of the complete control landscape for a given risk, showing causes and preventive controls on one side and consequences and mitigative controls on the other. For high-risk AI systems, the top ten risks should each have a bow-tie diagram in the AISDP evidence pack.

Which events trigger an immediate reassessment outside the quarterly cycle?
Triggers include serious incidents, substantial modifications, competent authority actions, new harmonised standards, and significant changes in the deployment context such as new jurisdictions or deployer populations.

How do NIST AI RMF and ISO/IEC 23894 differ?
NIST AI RMF provides a broad, principles-based foundation with four functions that map to the Article 9 lifecycle, while ISO/IEC 23894:2023 offers more prescriptive integration points with the development lifecycle.

Which implementation patterns are common for maintaining the register?
Three patterns are common: dedicated GRC platforms like OneTrust, adapted project management tools like Jira, and Git-based registers where each risk is a versioned file with changes flowing through pull requests.

What must AISDP Module 6 contain?
Module 6 must present the methodology, risk register with ratings and mitigations, residual risk acceptance decisions, version history, and cross-references to other AISDP modules where mitigations are documented.