Post-market monitoring findings only deliver value when they translate into documented system changes. This page describes the operational mechanism that connects PMM findings to decisions, engineering actions, validation, and AISDP updates under Article 72.
Post-market monitoring findings only reduce risk when they translate into concrete system changes. The feedback loop connects monitoring outputs to the risk management system, the development cycle, and the AI system description product dossier (AISDP). Each cycle follows a traceable path: a PMM finding triggers a decision, an authorised action, a validation gate, an AISDP update, and an evidence pack entry. Without this operational mechanism, findings accumulate in dashboards and reports without producing improvements. The loop's value depends entirely on its execution, making defined roles, escalation tiers, and documentation requirements essential for providers of high-risk AI systems who must demonstrate ongoing compliance to a competent authority.
Every PMM finding that warrants action must pass through a decision process determining what action is taken, who authorises it, and when it is executed. Without a defined decision authority, findings stall without translating into system improvements. The authority should be tiered by the impact of the proposed change, ensuring that routine adjustments move quickly while significant modifications receive appropriate governance scrutiny.
Threshold adjustments, where warning thresholds are tuned based on operational experience, can be authorised by the Technical SME and recorded in the PMM review minutes. Model retraining on updated data, where the retraining follows the documented pipeline and the retrained model passes all validation gates, should be authorised by the Technical Owner with notice to the AI Governance Lead. Changes to the model architecture, hyperparameters, or feature set require AI Governance Lead approval and a substantial change assessment.
System suspension or withdrawal requires AI Governance Lead sign-off, with immediate notice to the Legal and Regulatory Advisor and all affected deployers.
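The tiered authority described above can be expressed as a simple routing table. The sketch below is illustrative only; the `ChangeType` names and the `route_decision` helper are assumptions, not a prescribed schema, but the authoriser and notification pairings follow the tiers described in this section.

```python
from enum import Enum

class ChangeType(Enum):
    THRESHOLD_ADJUSTMENT = "threshold_adjustment"
    RETRAIN_ON_UPDATED_DATA = "retrain_on_updated_data"
    ARCHITECTURE_CHANGE = "architecture_change"
    SUSPENSION_OR_WITHDRAWAL = "suspension_or_withdrawal"

# Illustrative routing table: change type -> (required authoriser, notify list).
DECISION_AUTHORITY = {
    ChangeType.THRESHOLD_ADJUSTMENT: ("Technical SME", []),
    ChangeType.RETRAIN_ON_UPDATED_DATA: ("Technical Owner", ["AI Governance Lead"]),
    ChangeType.ARCHITECTURE_CHANGE: ("AI Governance Lead", []),
    ChangeType.SUSPENSION_OR_WITHDRAWAL: (
        "AI Governance Lead",
        ["Legal and Regulatory Advisor", "affected deployers"],
    ),
}

def route_decision(change_type: ChangeType) -> tuple[str, list[str]]:
    """Return the required authoriser and notification list for a proposed change."""
    return DECISION_AUTHORITY[change_type]
```

Encoding the tiers as data rather than prose makes the authority model easy to audit and to keep in step with the quality management system.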
PMM-triggered remediation competes with feature development, bug fixes, and other engineering priorities. Without a defined prioritisation mechanism, PMM actions are perpetually deprioritised in favour of business-driven work and the feedback loop stalls. Organisations should establish a PMM action backlog separate from the general engineering backlog, with its own prioritisation criteria.
Critical PMM actions, such as compliance threshold breaches and serious incident corrective actions, should override all other engineering work. Warning-level PMM actions are scheduled by the Technical Owner within the next development sprint. Informational PMM findings are reviewed by the AI Governance Lead at the quarterly PMM meeting and scheduled as capacity permits.
The AI Governance Lead should have the authority to elevate PMM action priority when engineering prioritisation is inconsistent with the compliance risk. This authority must be documented in the quality management system and understood by the engineering leadership.
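A separate PMM backlog with severity-driven ordering and a governance override could be sketched as follows. This is a minimal illustration under assumed names (`PMMBacklog`, `elevate`); the severity tiers mirror the critical, warning, and informational levels described above.

```python
from dataclasses import dataclass, field
import heapq

# Severity ranks mirror the tiers above: critical overrides all other work,
# warning is scheduled next sprint, informational waits for quarterly review.
SEVERITY_RANK = {"critical": 0, "warning": 1, "informational": 2}

@dataclass(order=True)
class PMMAction:
    rank: int
    finding_id: str = field(compare=False)
    description: str = field(compare=False)

class PMMBacklog:
    """A PMM action backlog kept separate from the engineering backlog."""

    def __init__(self) -> None:
        self._heap: list[PMMAction] = []

    def add(self, finding_id: str, severity: str, description: str) -> None:
        heapq.heappush(self._heap, PMMAction(SEVERITY_RANK[severity], finding_id, description))

    def elevate(self, finding_id: str) -> None:
        # Governance override: the AI Governance Lead raises an action to
        # critical when engineering prioritisation lags the compliance risk.
        for action in self._heap:
            if action.finding_id == finding_id:
                action.rank = SEVERITY_RANK["critical"]
        heapq.heapify(self._heap)

    def next_action(self) -> PMMAction:
        return heapq.heappop(self._heap)
```

Keeping the override as an explicit, logged operation gives the quality management system a concrete artefact showing when and why governance intervened.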
Every completed feedback loop cycle is documented by the Technical SME as a single traceable record in the evidence pack. The record captures the full chain from finding to resolution, providing an auditable trail for competent authority review.
The record should capture: the originating PMM finding, whether an alert, a report, or deployer feedback; the decision taken, including the authorising person and the date; and the action implemented, whether a code change, a retrain, a configuration update, or a threshold adjustment. It should also capture the validation result, showing the retrained model's performance against the validation gates; and the AISDP update, specifying the modules and sections modified and the new version number.
This documentation serves two purposes. It demonstrates to a competent authority that the PMM system is functioning as a closed loop: findings lead to actions, actions lead to improvements, and improvements are documented. It also provides version control traceability, linking each AISDP version change to the PMM finding that motivated it.
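The single traceable record described above could be modelled as a small data structure. The field names below are illustrative, not a prescribed schema, but they cover the elements this section lists: finding, decision, action, validation result, and AISDP update.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class FeedbackLoopRecord:
    """One completed feedback loop cycle, as kept in the evidence pack.

    Field names are illustrative, not a prescribed schema.
    """
    finding_id: str           # originating PMM finding
    finding_source: str       # alert, report, or deployer feedback
    decision: str             # decision taken
    authorised_by: str        # authorising person
    decision_date: date
    action: str               # code change, retrain, config update, or threshold adjustment
    validation_passed: bool   # performance against the validation gates
    aisdp_sections: tuple[str, ...]  # modules and sections modified
    aisdp_version: str        # new AISDP version number

def to_evidence_entry(record: FeedbackLoopRecord) -> dict:
    """Serialise a record for the evidence pack."""
    entry = asdict(record)
    entry["decision_date"] = record.decision_date.isoformat()
    return entry
```

Because each record carries both the finding identifier and the new AISDP version, the evidence pack can be queried in either direction: from a monitoring signal to the change it produced, or from a document version back to its motivation.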
The Technical SME monitors the feedback loop itself using meta-metrics that reveal operational health. These metrics provide the AI Governance Lead with visibility into the PMM system's operational effectiveness, enabling informed decisions about process improvements.
Key metrics include: time from PMM finding to decision, measuring how quickly the team responds to monitoring signals; time from decision to completed fix, measuring how quickly engineering executes the remedy; the share of PMM findings that result in a system change versus those documented and accepted as within tolerance; and the share of fixes that successfully resolve the originating finding versus those requiring further work.
These meta-metrics matter because a feedback loop with a median response time of six months is materially different from one with a median response time of two weeks. The difference directly affects the organisation's ability to maintain compliance under Article 72.
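The four meta-metrics listed above can be computed from completed cycle records. The sketch below assumes each cycle is a dict with `found`, `decided`, and `fixed` dates plus two booleans; the function name and keys are illustrative.

```python
from datetime import date
from statistics import median

def loop_health(cycles: list[dict]) -> dict:
    """Compute the four meta-metrics over completed feedback loop cycles.

    Each cycle dict is assumed to carry: found, decided, fixed (dates),
    system_change and resolved_finding (booleans).
    """
    decision_lag = [(c["decided"] - c["found"]).days for c in cycles]
    fix_lag = [(c["fixed"] - c["decided"]).days for c in cycles]
    changed = [c for c in cycles if c["system_change"]]
    resolved = [c for c in changed if c["resolved_finding"]]
    return {
        "median_days_finding_to_decision": median(decision_lag),
        "median_days_decision_to_fix": median(fix_lag),
        "share_findings_changed": len(changed) / len(cycles),
        "share_fixes_resolving": len(resolved) / len(changed) if changed else None,
    }
```

Trend lines on these numbers, reviewed at the quarterly PMM meeting, show whether the loop is tightening or drifting toward the six-month response times the text warns against.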
Article 72 establishes the regulatory basis for post-market monitoring obligations that the feedback loop operationalises. Providers of high-risk AI systems must demonstrate that their PMM system functions as a closed loop where findings produce documented improvements. A competent authority assessing compliance will examine whether the organisation can show traceable connections between monitoring findings, authorised decisions, implemented changes, and updated documentation. The feedback loop mechanism described in this page provides the operational structure needed to satisfy that examination.