Systems incorporating general-purpose AI models present a distinctive risk assessment challenge: the downstream provider bears full compliance responsibility under Article 25 but has limited visibility into the GPAI model's training data, architecture, and behaviour. This page covers how to identify inherited risks, exercise Article 25(3) information rights, build compensating controls for information gaps, and leverage the GPAI Code of Practice disclosures across the AISDP.
Systems incorporating general-purpose AI models present a risk assessment challenge that other AI systems do not. Full responsibility for compliance under Article 25 falls on the provider of the high-risk system, yet that provider has limited visibility into the GPAI model's training data, architecture, and behavioural characteristics. The downstream provider must assess risks it cannot fully observe.
This information asymmetry is structural, not incidental. GPAI providers typically cite commercial confidentiality or intellectual property concerns when declining to share detailed technical documentation. Article 53 requires GPAI model providers to make certain technical documentation available, and Article 25(3) creates a framework for information sharing between GPAI providers and downstream high-risk system providers. But the practical reality is that many providers have been reluctant to comply fully.
The risk assessment must therefore operate on two tracks simultaneously: extracting maximum information from the GPAI provider through formal channels, and building compensating controls that close the gaps where information is unavailable. For the broader risk management framework, see AI Risk Assessment and Management.
The risk assessment must identify three categories of inherited risk. Training data risks include bias embedded in the pre-training corpus, personal data processed without consent, and the presence of copyrighted material. Behavioural risks include hallucination, output instability, sensitivity to prompt phrasing, and emergent capabilities that the model provider did not anticipate.
Supply chain risks arise when the GPAI provider discontinues the model, changes its behaviour through updates, or restricts access. Some providers apply updates within a single version identifier that can alter output distributions without advance notice. The downstream provider's risk register must account for the possibility that the model's behaviour changes between assessments.
Each inherited risk requires a determination of whether the downstream system's architecture mitigates or amplifies it. A GPAI provider's disclosure that its model performs less reliably on certain languages or demographic groups becomes a fairness risk entry in the downstream system's risk register if the intended deployment population includes those groups.
Article 25(3) entitles a provider of a high-risk AI system to request specific information from the GPAI model provider. The AI System Assessor should submit a structured information request covering five categories.
Training data governance covers sources, collection methodology, geographic and demographic coverage, temporal scope, known biases, copyright compliance measures under Article 53(1)(c), and opt-out mechanisms. Metadata, distributional statistics, and provenance documentation are sufficient; the request need not demand the training data itself.
Model architecture and behaviour covers architecture family, parameter count, training methodology, alignment and safety tuning approach, known failure modes, output space constraints, and documented behavioural limitations. For models classified as systemic risk under Article 51, the assessor should request access to model evaluation results.
Versioning and change policy covers the versioning scheme, deprecation policy, change notification commitments, and the scope of changes that may occur within a single version identifier. Data handling practices covers inference input retention, training data feedback loops, data processing agreements, and data residency commitments. Safety and security covers red-teaming methodology and results, vulnerability disclosure policy, and incident response capabilities.
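The five request categories above can be captured as a simple tracking structure so that each request, response, and gap is recorded consistently. The following is an illustrative sketch only; the category keys, item lists, and field names are this page's paraphrase, not part of any standard.

```python
from dataclasses import dataclass, field

# Illustrative Article 25(3) request checklist: the five categories and
# their items, paraphrased from the guidance above.
REQUEST_CATEGORIES = {
    "training_data_governance": [
        "sources", "collection methodology",
        "geographic and demographic coverage", "temporal scope",
        "known biases", "copyright compliance (Art. 53(1)(c))",
        "opt-out mechanisms",
    ],
    "model_architecture_and_behaviour": [
        "architecture family", "parameter count", "training methodology",
        "alignment and safety tuning", "known failure modes",
        "output space constraints", "documented behavioural limitations",
    ],
    "versioning_and_change_policy": [
        "versioning scheme", "deprecation policy",
        "change notification commitments", "scope of in-version changes",
    ],
    "data_handling_practices": [
        "inference input retention", "training data feedback loops",
        "data processing agreements", "data residency commitments",
    ],
    "safety_and_security": [
        "red-teaming methodology and results",
        "vulnerability disclosure policy", "incident response capabilities",
    ],
}

@dataclass
class RequestRecord:
    """One row of the assessor's request log."""
    category: str
    date_sent: str
    response_received: bool = False
    gaps: list = field(default_factory=list)  # items the provider declined

def open_requests(records):
    """Categories requested but not yet answered by the provider."""
    return [r.category for r in records if not r.response_received]
```

A record of what was asked and when, kept separately from the provider's answers, makes the later residual-risk discussion auditable.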
The assessor documents each category requested, the date, the provider's response, and the residual risk arising from any gaps. Where the provider refuses necessary information, the assessor records the refusal and escalates to the AI Governance Lead for a residual risk acceptance decision. A pattern of provider non-disclosure may itself be a risk factor that influences the model selection decision.
Where the GPAI provider's disclosures are insufficient for a complete risk assessment, the organisation must implement compensating controls. These include systematic behavioural testing against a sentinel dataset that exercises the risk dimensions the training data disclosures do not address. The sentinel dataset must be designed to probe known failure modes such as demographic bias, factual accuracy, and output consistency.
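Sentinel testing of this kind can be sketched as a small harness that runs the model against fixed probes and flags failed checks. This is a minimal illustration, not a testing framework: `call`-style model functions, the case structure, and the paired-prompt bias probe are all hypothetical, and a production harness would use graded similarity metrics rather than exact comparison.

```python
def paired_bias_probe(model, template, group_a, group_b):
    """Demographic bias probe: the same prompt with only the group term
    swapped should produce materially similar outputs. (Exact equality is
    a placeholder for a graded similarity metric.)"""
    out_a = model(template.format(group=group_a))
    out_b = model(template.format(group=group_b))
    return out_a == out_b

def run_sentinel(model, cases):
    """Run every sentinel case against the model; return failed case ids.

    Each case is a dict with an 'id', a 'prompt', and a 'check' predicate
    that encodes the expected behaviour (factual accuracy, consistency...).
    """
    failures = []
    for case in cases:
        if not case["check"](model(case["prompt"])):
            failures.append(case["id"])
    return failures
```

The failing case ids feed directly into the inherited-risk entries of the risk register, giving each behavioural risk an observable, re-runnable trigger.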
Output filtering and validation layers constrain the GPAI model's outputs to the acceptable range for the system's intended purpose. These layers act as guardrails, catching outputs that fall outside expected parameters regardless of why the model produced them.
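A guardrail layer of this sort can be as simple as a set of validators applied before the output leaves the system. The sketch below assumes a text output and a safe fallback action; the validator names and the `ESCALATE_TO_HUMAN` fallback are illustrative choices, not prescribed behaviour.

```python
import re

def within_length(limit):
    """Validator: output must not exceed a fixed length."""
    return lambda out: len(out) <= limit

def matches_schema(pattern):
    """Validator: output must fully match an expected format."""
    compiled = re.compile(pattern)
    return lambda out: bool(compiled.fullmatch(out))

def guard(output, validators, fallback="ESCALATE_TO_HUMAN"):
    """Pass the output through only if every validator accepts it;
    otherwise return a safe fallback, regardless of why the model
    produced the out-of-range output."""
    if all(v(output) for v in validators):
        return output
    return fallback
```

Because the guardrail judges only the output, it catches failures the risk assessment could not anticipate, which is exactly what compensating controls for unobservable model behaviour need to do.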
Continuous monitoring of the GPAI model's behaviour for drift requires automated alerting when the model's output distribution shifts beyond defined tolerances. This is particularly important given that some providers apply updates within a version identifier without notice. The monitoring must detect behavioural changes that affect the downstream system's risk profile. For the risk scoring methodology applied to GPAI-inherited risks, see Risk Scoring and Calibration.
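One common way to quantify such a distribution shift is the Population Stability Index (PSI) over the model's output categories, alerting when it exceeds a tolerance. The sketch below is illustrative: the 0.2 threshold is a conventional rule of thumb, not a figure from this guidance, and the category distributions would in practice come from repeated sentinel runs.

```python
import math

def psi(baseline, current, eps=1e-6):
    """Population Stability Index between two output-category
    distributions (dicts mapping category -> proportion).
    ~0 means no shift; values above roughly 0.2 are commonly
    read as significant drift."""
    cats = set(baseline) | set(current)
    total = 0.0
    for c in cats:
        b = baseline.get(c, 0.0) + eps  # eps avoids log(0) on new/absent categories
        q = current.get(c, 0.0) + eps
        total += (q - b) * math.log(q / b)
    return total

def check_drift(baseline, current, tolerance=0.2):
    """Alert flag: has the sentinel-output distribution shifted
    beyond the defined tolerance?"""
    return psi(baseline, current) > tolerance
```

Run against each new sentinel pass, this turns "the provider silently changed the model" from an unobservable risk into an automated alert.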
The AI Office Code of Practice for GPAI providers, finalised under Article 56 in 2025, defines specific transparency and documentation commitments that participating GPAI providers have agreed to meet. These commitments create a baseline expectation for the information that should be available from a compliant provider.
The AI System Assessor should cross-reference the Code of Practice's transparency commitments against the information categories in the Article 25(3) request. Where the GPAI provider participates in the Code of Practice, the downstream provider can reasonably expect compliance with these commitments. Where the provider does not participate, information gaps are likely wider and compensating controls more demanding.
The Code's disclosures inform several AISDP domains directly. The GPAI provider's safety evaluation results and known limitation disclosures feed into the inherited risk analysis in Module 6. Training data transparency supports the data governance documentation in Module 4. Security testing provisions partially satisfy Article 15 robustness requirements for the GPAI layer. Technical documentation supports deployer-facing explanations required by Article 13. Gaps between the GPAI provider's disclosures and the downstream system's needs must be filled by the downstream provider's own testing and documentation.
The AI System Assessor should maintain a structured GPAI disclosure register as a standing AISDP artefact. For each Code of Practice commitment area, the register records the disclosure received (or its absence), the AISDP domain it informs, and the compensating control applied where the disclosure is absent or insufficient.
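The register's minimum structure follows directly from that description. The sketch below is illustrative: the field names and the example module labels are this page's shorthand, not a mandated schema.

```python
from dataclasses import dataclass

@dataclass
class DisclosureEntry:
    """One row of the GPAI disclosure register (field names illustrative)."""
    commitment_area: str        # Code of Practice commitment area
    disclosure_received: bool   # disclosure received, or absent
    aisdp_domain: str           # e.g. "Module 6 (risk)"
    compensating_control: str   # "" until a control is applied
    last_reviewed: str          # review date, for the change-trigger workflow

def needs_compensating_control(register):
    """The open gaps: areas with no disclosure and no control yet applied."""
    return [e.commitment_area for e in register
            if not e.disclosure_received and not e.compensating_control]
```

A query like `needs_compensating_control` is what makes the register operational rather than archival: each review cycle starts from the open gaps it returns.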
This register serves as both an audit trail and a change management trigger. When the GPAI provider updates its disclosures (as the Code of Practice requires for material changes), the assessor reviews the register and determines whether any AISDP modules require corresponding updates. Without this tracking mechanism, provider disclosure changes may go unnoticed, leaving the downstream system's risk assessment outdated.
The register also documents contractual risk transfer gaps. Where the GPAI provider's terms of service limit liability or disclaim responsibility for downstream use, the register identifies the resulting gap in risk allocation. The organisation may be bearing risks it assumed the GPAI provider was managing. These contractual gaps are reflected in the residual risk assessment.
Where the integrated GPAI model has been classified as having systemic risk under Article 51, the GPAI provider bears additional obligations under Article 55: model evaluations including adversarial testing, systemic risk assessment and mitigation, serious incident reporting to the AI Office and relevant national competent authorities, and adequate cybersecurity protection.
These obligations create both risk and opportunity for the downstream integrator. The risk is that non-compliance by the GPAI provider with its systemic risk obligations creates inherited compliance exposure for the downstream high-risk system. The opportunity is that the provider's compliance artefacts (evaluation reports, adversarial testing results, risk assessments) may reduce the downstream provider's own assessment burden, provided the artefacts are accessible and relevant to the downstream use case.
The AI System Assessor should request access to the GPAI provider's systemic risk documentation, assess its relevance to the downstream system's risk profile, and document which inherited risks are covered by the GPAI provider's own controls and which require additional mitigation at the system level. For the iterative review process that tracks GPAI provider compliance over time, see Iterative Risk Management.
Frequently asked questions

What should the assessor do if the GPAI provider refuses to disclose requested information?
Document the refusal, assess whether compensating controls can mitigate the resulting uncertainty, and escalate to the AI Governance Lead for a residual risk acceptance decision. A pattern of non-disclosure is itself a risk factor for model selection.

Does the downstream risk assessment require access to the training data itself?
No. Metadata, distributional statistics, and provenance documentation are sufficient for the downstream risk assessment. The request should focus on data governance documentation rather than the data itself.

How can the downstream provider detect undisclosed changes to the GPAI model's behaviour?
Through continuous monitoring of the GPAI model's output distribution against a sentinel dataset, with automated alerting when behaviour shifts beyond defined tolerances. Some providers apply updates within a version identifier without notice.

Can the downstream provider rely on the GPAI provider's systemic risk compliance artefacts?
Only if they are accessible and relevant to your specific use case. The downstream provider must assess whether the provider's evaluations and testing cover the deployment context, and fill gaps through its own red-teaming and testing programme.

How are Article 25(3) information rights exercised?
Through structured requests covering five categories: training data governance, model architecture and behaviour, versioning and change policy, data handling practices, and safety and security, with each response and gap documented.

What compensating controls address GPAI information gaps?
Sentinel dataset behavioural testing, output filtering and validation layers constraining outputs to acceptable ranges, and continuous drift monitoring with automated alerting when output distributions shift.

How do the GPAI Code of Practice disclosures map to the AISDP?
Code disclosures feed directly into Module 6 (risk), Module 4 (data governance), Module 9 (cybersecurity), and Module 8 (transparency). Gaps are attributed to provider non-compliance or Code scope limitations.

What additional obligations apply when the GPAI model has systemic risk?
Article 55 requires model evaluations, systemic risk mitigation, serious incident reporting, and cybersecurity. Non-compliance creates inherited exposure, but compliance artefacts may reduce the downstream assessment burden.