Article 25(3) gives the provider of a high-risk AI system a statutory right to request from the GPAI model provider the information necessary to ensure the high-risk system's compliance with the AI Act. This is not a courtesy or a commercial negotiation point; it is a legal entitlement embedded in the regulation. The AI System Assessor exercises this right through a structured information request that covers six defined categories, each mapped to specific modules of the AI System Description Profile (AISDP).
The structured request must be submitted in writing, specifying the legal basis under Article 25(3), the information categories required, the intended use of the information, and the downstream system's risk classification. Each request is documented in the GPAI disclosure register with the date, content, and recipient, creating an auditable trail from the moment the information need is identified through to the provider's response.
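As a minimal illustration, the request and its register entry can be captured as a structured record. The sketch below is hypothetical: the `Article25Request` class and its field names are illustrative, not a prescribed AISDP schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article25Request:
    """Illustrative record of an Article 25(3) information request.

    Field names are assumptions for this sketch, not a mandated format.
    """
    legal_basis: str                       # e.g. "EU AI Act, Article 25(3)"
    recipient: str                         # GPAI provider the request is sent to
    categories: list[str]                  # which of the six information categories
    intended_use: str                      # why the information is needed downstream
    risk_classification: str               # downstream system's risk class
    date_sent: date = field(default_factory=date.today)
    response_received: str | None = None   # filled in when the provider replies

request = Article25Request(
    legal_basis="EU AI Act, Article 25(3)",
    recipient="ExampleGPAI Ltd",           # hypothetical provider
    categories=["training data governance", "versioning and change policy"],
    intended_use="Fairness and data governance assessment (AISDP Module 4)",
    risk_classification="high-risk (Annex III)",
)
```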
The structured information request covers six categories, each feeding specific AISDP modules and each carrying characteristic disclosure patterns from GPAI providers. These categories provide the downstream provider with the information architecture needed to assess whether the GPAI model component meets the requirements the AI Act places on the overall high-risk system.
Training data governance covers sources, collection methodology, geographic and demographic coverage, temporal scope, known biases, copyright compliance measures, and opt-out mechanisms. This feeds AISDP Module 4 (Data Governance). GPAI providers typically disclose high-level categories but withhold granular source lists. Copyright compliance statements are generally provided, though representativeness data at subgroup level is rarely available.
Model architecture and behaviour covers architecture family, parameter count, training methodology, alignment approach, known failure modes, output constraints, and behavioural limitations. This feeds AISDP Module 3 (Architecture). Providers usually disclose architecture family and parameter count. Alignment methodology is described at a high level, but detailed failure mode analysis is rarely shared.
Safety and security evaluation covers red-teaming methodology and results, adversarial testing, vulnerability disclosure policy, incident response capabilities, and security certifications. This feeds AISDP Module 9 (Cybersecurity). Disclosure varies widely: some providers publish model cards and safety reports, others share results under NDA, and some refuse substantive disclosure entirely.
Versioning and change policy covers version scheme, deprecation policy, change notification commitments, scope of within-version updates, and rollback availability. This feeds AISDP Module 10 (Version Control). This information is usually available in service documentation, though a critical gap remains: many providers apply silent updates within a version identifier without notification.
Data handling practices cover inference data retention, use of inference data for training, data processing agreements, sub-processor disclosure, and data residency. This feeds AISDP Modules 4 and 9. Major providers now publish DPAs and sub-processor lists, though inference data use for training remains a contested area.
Systemic risk documentation covers model evaluation results, risk assessments, and serious incident reports for models classified under Article 51. This feeds AISDP Module 6 (Risk Management). Article 55 obligations took effect in August 2025, meaning the corpus of available systemic risk documentation is still developing.
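Pulling the six categories together, the mapping to AISDP modules can be expressed as a simple lookup table. The structure below is illustrative, a sketch of what a disclosure-tracking script might use rather than a defined schema:

```python
# Mapping of Article 25(3) information categories to the AISDP modules
# they feed, as described in the six category paragraphs above.
CATEGORY_TO_MODULES: dict[str, list[str]] = {
    "training data governance":         ["Module 4 (Data Governance)"],
    "model architecture and behaviour": ["Module 3 (Architecture)"],
    "safety and security evaluation":   ["Module 9 (Cybersecurity)"],
    "versioning and change policy":     ["Module 10 (Version Control)"],
    "data handling practices":          ["Module 4 (Data Governance)",
                                         "Module 9 (Cybersecurity)"],
    "systemic risk documentation":      ["Module 6 (Risk Management)"],
}
```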
The downstream provider follows a graduated four-level escalation when a GPAI provider refuses or restricts the disclosure to which Article 25(3) entitles it. Providers typically refuse for three reasons: commercial confidentiality around training data and model architecture, intellectual property protection where safety evaluation results may reveal vulnerabilities, and operational scalability where bespoke disclosures to every integrator are commercially impractical.
Level 1: Negotiate. The AI System Assessor requests the information through the GPAI provider's enterprise or partnership channel. Many providers maintain dedicated compliance teams that can provide information under NDA that is not available through standard API documentation. The Assessor documents the negotiation, including what was requested, what was offered, and what was refused.
Level 2: Escalate to the AI Office. Where the GPAI provider participates in the Code of Practice under Article 56, the downstream provider can refer the provider's non-disclosure to the AI Office. The Code of Practice defines specific transparency commitments, and a participating provider's refusal to honour those commitments is a matter for the AI Office's enforcement powers under Article 88.
Level 3: Implement compensating controls. Where disclosure cannot be obtained through negotiation or escalation, the downstream provider must fill the information gaps through its own testing, evaluation, and monitoring. The compensating control architecture addresses the four dimensions of information asymmetry: training data opacity, behavioural unpredictability, version instability, and security opacity.
Level 4: Document the residual risk. Where compensating controls cannot fully substitute for the missing information, the AI System Assessor records the residual risk in the risk register (AISDP Module 6), assesses whether the residual risk falls within the acceptability threshold, and escalates to the AI Governance Lead for a risk acceptance decision. A pattern of GPAI provider non-disclosure may itself influence the model selection decision: an alternative provider offering greater transparency reduces the downstream provider's compliance risk and the burden of implementing compensating controls.
The GPAI disclosure register is a structured register maintained by the AI System Assessor as a standing AISDP artefact. Each row records the Code of Practice commitment area, the information category requested, the disclosure received or its absence, the date of disclosure, the AISDP module it informs, the compensating control applied where the disclosure is absent or insufficient, and the residual risk assessment.
The register functions as both an audit trail and a change management trigger. When the GPAI provider updates its disclosures, the Assessor reviews the register and determines whether any AISDP modules require corresponding updates. When the GPAI provider releases a new model version, the Assessor re-evaluates whether the existing disclosures remain valid for the new version. This version-triggered review is essential because model updates can invalidate previous safety evaluations, behavioural characterisations, and fairness assessments.
The structured format pairs every information gap with a corresponding compensating control and residual risk assessment across each of the six information categories. When a gap appears in the training data governance row, for instance, the register links directly to the behavioural proxy testing and output distribution analysis documented in AISDP Module 4.
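A minimal sketch of one register row follows, assuming a Python-based artefact; the `DisclosureRegisterEntry` class and its field names are illustrative, mirroring the columns described above rather than any mandated format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DisclosureRegisterEntry:
    """One row of the GPAI disclosure register (illustrative schema)."""
    commitment_area: str               # Code of Practice commitment area
    category: str                      # one of the six information categories
    disclosure: str | None             # what was disclosed; None if refused
    disclosure_date: date | None       # when it was received, if at all
    aisdp_module: str                  # AISDP module the disclosure informs
    compensating_control: str | None   # control applied if disclosure is absent
    residual_risk: str                 # e.g. "low", "medium", "high"

entry = DisclosureRegisterEntry(
    commitment_area="Transparency",
    category="training data governance",
    disclosure=None,                   # provider declined granular source lists
    disclosure_date=None,
    aisdp_module="Module 4 (Data Governance)",
    compensating_control="behavioural proxy testing + output distribution analysis",
    residual_risk="medium",
)
```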
The compensating control architecture addresses four dimensions of information asymmetry that arise when GPAI provider disclosures are insufficient: training data opacity, behavioural unpredictability, version instability, and security opacity. Each dimension requires a distinct set of controls because the nature of the information gap differs.
Training data opacity arises because the downstream provider cannot inspect, audit, or reproduce the GPAI model's training data. Article 10's requirements for training data documentation, completeness, representativeness, and bias assessment cannot be satisfied at the pre-training level. The compensating approach operates at two layers.
Behavioural proxy testing requires the Technical SME to design a sentinel evaluation dataset that exercises the fairness, accuracy, and safety dimensions the training data disclosures do not address. The dataset includes balanced representation across the protected characteristic subgroups relevant to the downstream system's intended purpose; adversarial examples testing for known bias patterns in the domain, such as gender bias in recruitment language, racial bias in credit-related terminology, and age bias in clinical descriptions; edge cases that probe the model's behaviour at the boundaries of its training distribution; and multilingual samples where the system serves non-English populations. The sentinel dataset is versioned, maintained, and re-evaluated whenever the GPAI model version changes. Evaluation results are documented in AISDP Module 4 alongside an explicit statement that the downstream provider could not conduct Article 10 data documentation at the pre-training level and has substituted behavioural proxy testing.
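To make the mechanics concrete, here is a minimal sketch of a sentinel evaluation run, assuming the model is reachable through some `query_model` callable. The function names, dataset layout, and pass-rate metric are all illustrative, not a prescribed methodology.

```python
from collections import defaultdict
from typing import Callable

# Illustrative sentinel items: (prompt, subgroup label, expected property).
# A real sentinel dataset is versioned alongside the GPAI model version.
SENTINEL_DATASET = [
    ("Draft a rejection letter for applicant A.", "female-coded", "neutral_tone"),
    ("Draft a rejection letter for applicant B.", "male-coded", "neutral_tone"),
    # ... balanced subgroups, adversarial examples, edge cases, multilingual items
]

def satisfies(output: str, expected: str) -> bool:
    """Placeholder property check; real checks are property-specific classifiers."""
    return bool(output)  # stub: accept any non-empty output

def run_sentinel_eval(query_model: Callable[[str], str]) -> dict[str, float]:
    """Score model outputs per subgroup; returns pass rate by subgroup.

    `query_model` is an assumed adapter around the GPAI provider's API.
    """
    passed: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for prompt, subgroup, expected in SENTINEL_DATASET:
        output = query_model(prompt)
        total[subgroup] += 1
        if satisfies(output, expected):
            passed[subgroup] += 1
    return {sg: passed[sg] / total[sg] for sg in total}
```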
Output distribution analysis requires the Technical SME to analyse the GPAI model's output distribution across demographic subgroups within the downstream system's operational context. This analysis uses the downstream system's own operational data, not the GPAI provider's training data. The analysis tests whether the GPAI model produces systematically different outputs for different subgroups when presented with equivalent inputs. Results are documented in AISDP Module 4 and cross-referenced to the fairness evaluation in Module 5.

Behavioural unpredictability arises because foundation models exhibit emergent behaviours that were not explicitly designed, including reasoning capabilities, tool-use abilities, and failure modes that appear at scale and were not present in smaller models. The downstream provider cannot predict or document these behaviours from first principles. The Technical SME therefore conducts a systematic behavioural characterisation covering five dimensions: output determinism (given the same input, does the model produce the same output, and at what temperature settings?); sensitivity to prompt phrasing (does rephrasing the same question produce materially different outputs?); refusal behaviour (under what conditions does the model refuse to produce output, and does the refusal pattern affect any demographic subgroup disproportionately?); hallucination rate (what proportion of outputs contain factual errors, and does the error rate vary by domain or input type?); and boundary behaviour (how does the model behave when inputs fall outside the domain covered by the downstream system's intended purpose?). The characterisation is documented in AISDP Module 3 and updated whenever the GPAI model version changes. The AI System Assessor uses the characterisation to inform the risk assessment in Module 6.
GPAI providers update their models on their own schedule, and these updates can change the model's behaviour in ways that affect the downstream system's compliance posture. Some providers apply changes within a version identifier without notification: a model accessed under the same identifier today may behave differently from the model accessed three months ago.
Three controls address version instability. Where the GPAI provider's API supports requesting a specific model checkpoint rather than the latest version, version pinning holds the production system on a known checkpoint. Version upgrades are then treated as system changes that trigger the governance pipeline gates and the change classification logic.
Sentinel monitoring maintains a continuous evaluation pipeline that tests the GPAI model's behaviour against the sentinel dataset on a defined schedule, daily for high-risk systems. If the evaluation detects a behavioural shift exceeding defined tolerances, whether output distribution shift, fairness metric change, or hallucination rate increase, an automated alert triggers investigation. The alert is documented in the PMM records (AISDP Module 12) and may trigger the substantial modification assessment.
Contractual notification completes the set: the procurement contract should require the GPAI provider to notify the downstream provider before making changes that could affect the model's behaviour, with a recommended minimum notice period of 30 days for high-risk systems.
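As a sketch of how the sentinel monitoring control might be wired up, building on the `run_sentinel_eval` harness sketched earlier: the pinned model identifier, baseline pass rates, and tolerance value below are assumptions for illustration, not prescribed thresholds.

```python
# Illustrative drift check, run on a defined schedule (daily for high-risk
# systems). Baseline metrics are those recorded at system sign-off.
PINNED_MODEL = "example-model-2025-01-15"              # hypothetical checkpoint id
BASELINE = {"female-coded": 0.97, "male-coded": 0.97}  # pass rates at sign-off
TOLERANCE = 0.03                                       # max acceptable drop per subgroup

def check_for_drift(current: dict[str, float]) -> list[str]:
    """Return alert messages for subgroups drifting beyond tolerance.

    Alerts are investigated and logged in the PMM records (AISDP Module 12)
    and may trigger the substantial modification assessment.
    """
    alerts = []
    for subgroup, baseline_rate in BASELINE.items():
        rate = current.get(subgroup, 0.0)
        if baseline_rate - rate > TOLERANCE:
            alerts.append(
                f"{PINNED_MODEL}: {subgroup} pass rate {rate:.2f} vs "
                f"baseline {baseline_rate:.2f}; investigate and record in Module 12"
            )
    return alerts
```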
The downstream provider cannot audit the GPAI provider's security infrastructure, training pipeline security, or model integrity controls. Article 15's resilience requirements must be satisfied through the downstream system's own security architecture.
Prompt injection defence in depth requires multiple layers of protection. Input sanitisation validates and constrains user inputs before they reach the GPAI model. System prompt isolation separates the system prompt from user content using the provider's recommended separation mechanisms. Output validation verifies that the model's output conforms to expected format and content boundaries. Action gating, for agentic systems, requires explicit approval before the system acts on the model's output. These controls are documented in AISDP Module 9 with test evidence from the red-teaming programme.
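A minimal sketch of the four layers chained around a model call follows; every function here is a hypothetical placeholder for the project-specific control it names, and `query_model` and `require_approval` are assumed adapters rather than any real API.

```python
SYSTEM_PROMPT = "You are a constrained assistant..."   # illustrative system prompt

class ValidationError(Exception):
    """Raised when a defence-in-depth layer rejects input or output."""

def sanitise(text: str) -> str:
    """Layer 1 placeholder: constrain length and strip non-printable characters."""
    return "".join(ch for ch in text if ch.isprintable())[:4000]

def conforms(output: str) -> bool:
    """Layer 3 placeholder: check expected format and content boundaries (stub)."""
    return bool(output) and len(output) < 8000

def guarded_call(user_input: str, query_model, require_approval) -> str:
    """Chain the four layers described above: sanitise, isolate, validate, gate."""
    clean = sanitise(user_input)                         # 1. input sanitisation
    # 2. system prompt isolation: system prompt and user content travel in
    #    separate fields, using the provider's recommended separation mechanism.
    output = query_model(system_prompt=SYSTEM_PROMPT, user_content=clean)
    if not conforms(output):                             # 3. output validation
        raise ValidationError("output outside expected boundaries")
    if not require_approval(output):                     # 4. action gating (agentic)
        raise ValidationError("model-proposed action was not approved")
    return output
```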
Model integrity verification addresses the risk that the model has been tampered with or replaced. Where the GPAI model is accessed via API, the downstream provider cannot directly verify integrity. The sentinel monitoring pipeline provides a behavioural proxy: if the model's behaviour on the sentinel dataset changes unexpectedly, this may indicate an integrity issue as well as a version change. Where the GPAI model is deployed on-premises or in a private cloud, the Technical SME verifies model artefact integrity using cryptographic hashes at deployment time and periodically during operation. Both prompt injection controls and integrity verification measures are documented in AISDP Module 9 as evidence of the downstream system's compliance with Article 15's resilience requirements.
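For the on-premises case, integrity verification can be as simple as hashing the model artefact against a digest recorded at deployment time. A minimal sketch, with the artefact path and expected digest as assumptions:

```python
import hashlib
from pathlib import Path

# Digest recorded at deployment time in a signed manifest
# (path and value below are illustrative placeholders).
EXPECTED_SHA256 = "0f3a..."   # placeholder digest from the manifest
ARTEFACT = Path("/models/example-model-2025-01-15.safetensors")  # hypothetical

def artefact_digest(path: Path) -> str:
    """Stream the model artefact through SHA-256 to avoid loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity() -> bool:
    """Compare the current digest with the deployment-time value."""
    ok = artefact_digest(ARTEFACT) == EXPECTED_SHA256
    if not ok:
        print(f"integrity check failed for {ARTEFACT}; escalate per AISDP Module 9")
    return ok
```

Run at deployment and on a periodic schedule, this check complements the behavioural proxy that sentinel monitoring provides for API-accessed models.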