Article 25(1)(b) treats any organisation that fine-tunes a GPAI model for a high-risk use case as the provider of a high-risk AI system. This page explains when provider status is triggered, what systemic risk classification means for downstream providers, and how to document GPAI integration across the AISDP.
Any organisation that fine-tunes a GPAI model for a high-risk use case becomes the provider of a high-risk AI system under Article 25(1)(b). The regulation treats fine-tuning as a substantial modification because it changes the model's intended purpose from general-purpose to a specific high-risk application. The compliance consequence is that the fine-tuning organisation must prepare a full AISDP, conduct a conformity assessment, sign a Declaration of Conformity, and register in the EU database. The GPAI provider's Article 53 compliance does not substitute for any of these obligations.
This provider boundary applies regardless of the fine-tuning technique used. Full fine-tuning (adjusting all model parameters on domain-specific data), parameter-efficient methods such as LoRA, QLoRA, and prefix tuning, reinforcement learning from human feedback applied to a pre-trained model, and distillation from a larger GPAI model into a smaller domain-specific model all constitute fine-tuning for the purposes of the provider boundary analysis. The distinction matters: in-context learning, where examples are provided in the prompt without modifying model parameters, does not constitute fine-tuning, although it may still constitute a substantial modification if it changes the system's intended purpose. Organisations integrating GPAI model components should consult GPAI Model Integration for High-Risk Systems for the broader integration framework.
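The parameter-modification boundary can be illustrated with a minimal Python sketch. All names here are hypothetical (there is no real API behind them); the point is that in-context learning leaves the model's weights untouched, while any fine-tuning technique produces a new checkpoint:

```python
# Illustrative sketch only: contrasts in-context learning (no parameter
# change, generally not fine-tuning under Article 25(1)(b)) with a
# fine-tuning step that yields a new checkpoint (which triggers the
# provider boundary). All identifiers are hypothetical.

FEW_SHOT_EXAMPLES = [
    ("CV: 10 years of Python, led a team of 5.", "shortlist"),
    ("CV: no relevant experience listed.", "reject"),
]

def build_in_context_prompt(cv_text: str) -> str:
    """In-context learning: examples live in the prompt; model weights
    are untouched, so this is not fine-tuning (though it may still be a
    substantial modification if it changes the intended purpose)."""
    lines = ["You screen CVs for a recruitment role."]
    for cv, label in FEW_SHOT_EXAMPLES:
        lines.append(f"{cv} -> {label}")
    lines.append(f"{cv_text} ->")
    return "\n".join(lines)

def fine_tune(base_model: str, training_file: str) -> str:
    """Fine-tuning (full, LoRA, QLoRA, prefix tuning, RLHF, distillation)
    modifies parameters and returns a new checkpoint. Under Article
    25(1)(b), the organisation running this step becomes the provider
    of the resulting high-risk system."""
    # Placeholder: a real job would call the GPAI provider's API.
    return f"{base_model}-ft-recruitment-v1"
```

The checkpoint identifier returned by the fine-tuning step marks the point at which provider status attaches, regardless of which technique produced it.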
A system that uses a GPAI model via API with a carefully crafted system prompt, retrieval-augmented generation, and output post-processing, without modifying the model's parameters, occupies a grey zone. The organisation has not fine-tuned the model, but it has built a high-risk system around it. Article 25(1)(a) applies if the organisation places the system on the market under its own name. Article 25(1)(c) applies if the system's intended purpose differs from the GPAI model's general purpose.
In practice, most high-risk applications of GPAI models trigger provider status through one of these pathways, regardless of whether fine-tuning occurs. The Legal and Regulatory Advisor documents the provider status analysis for each GPAI-based system, including the specific Article 25 pathway, the reasoning, and the conclusion. This analysis is retained in AISDP Module 1 and reviewed whenever the system's architecture, the GPAI model version, or the deployment context changes. For organisations deploying systems where multiple roles share responsibility, Deployer Obligations and Compliance Requirements covers the deployer side of this boundary.
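The provider status analysis retained in AISDP Module 1 can be kept as a structured record. The following Python sketch is a hypothetical data model, not a prescribed format; the field names and example values are assumptions:

```python
# Hypothetical record structure for the Module 1 provider status
# analysis: pathway, reasoning, conclusion, and review triggers.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Article25Pathway(Enum):
    OWN_NAME = "25(1)(a)"        # placed on market under own name
    FINE_TUNING = "25(1)(b)"     # fine-tuned for a high-risk purpose
    PURPOSE_CHANGE = "25(1)(c)"  # purpose differs from the GPAI model's

@dataclass
class ProviderStatusAnalysis:
    """One entry per GPAI-based system, reviewed whenever architecture,
    model version, or deployment context changes."""
    system_name: str
    gpai_model: str
    pathway: Article25Pathway
    reasoning: str
    conclusion: str
    review_triggers: list = field(default_factory=lambda: [
        "architecture change",
        "GPAI model version change",
        "deployment context change",
    ])
    reviewed_on: date = field(default_factory=date.today)

# Example entry (all values illustrative)
analysis = ProviderStatusAnalysis(
    system_name="CV screening assistant",
    gpai_model="example-gpai-model-v4",
    pathway=Article25Pathway.FINE_TUNING,
    reasoning="Model fine-tuned on recruitment data; purpose narrowed "
              "from general-purpose to recruitment screening.",
    conclusion="Organisation is provider of a high-risk AI system.",
)
```

Keeping the pathway as an enumerated value makes it straightforward to audit which Article 25 limb each system relies on when the register is reviewed.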
Where the integrated GPAI model has been classified as a model with systemic risk under Article 51, the downstream provider faces additional considerations. The downstream system inherits the systemic risk characteristics of the underlying model, and the risk assessment in AISDP Module 6 must explicitly address whether the deployment context amplifies or constrains that risk. A GPAI model with systemic risk deployed in a closed, narrowly scoped recruitment screening system presents a different risk profile from the same model deployed in an open-ended customer interaction system.
Article 55 requires systemic risk model providers to conduct model evaluations, assess and mitigate systemic risks, report serious incidents, and ensure cybersecurity. The downstream provider should request access to the evaluation results and risk assessments that the GPAI provider is required to produce. Where these are available, they provide valuable input to the downstream provider's own risk assessment. Where they are unavailable, compensating controls apply with heightened intensity, as described in GPAI Provider Due Diligence and Disclosure Registers.
Article 55(1)(c) requires systemic risk model providers to report serious incidents to the AI Office. The downstream provider's own Article 73 reporting obligation operates in parallel. A serious incident involving a high-risk system built on a systemic risk model may trigger simultaneous reporting by the downstream provider to the national market surveillance authority and by the GPAI provider to the AI Office. The Legal and Regulatory Advisor coordinates to ensure that the two reports are consistent.
Each AISDP module requires specific content addressing the GPAI model component. Module 1 (System Description) must identify the GPAI model, its provider, the API version or model identifier in use, the provider's EU registration status, and the intended purpose mapping from general-purpose to high-risk application. Module 2 (Development Process) covers the model selection rationale, fine-tuning methodology where applicable, the prompt engineering approach, and the integration architecture.
Module 3 (Architecture) documents the GPAI model's position in the system architecture, the data flows between the downstream system and the GPAI model API, the output validation and filtering layers, and the fallback mechanisms. Module 4 (Data Governance) records the GPAI disclosure register's training data entries, behavioural proxy testing results, output distribution analysis, and the Article 10 gap statement. Module 6 (Risk Management) captures the inherited risk analysis, the GPAI provider due diligence record, residual risk from information gaps, and the contractual risk transfer analysis.
Module 8 (Transparency) describes how the GPAI model's involvement is disclosed to deployers and affected persons, the explanation methodology for GPAI-generated outputs, and the limitations of post-hoc explanations for foundation model reasoning. Module 9 (Cybersecurity) covers the prompt injection defence architecture, the GPAI-specific threat model, red-teaming results for GPAI attack surfaces, and security opacity compensating controls. Module 10 (Version Control) addresses the GPAI model version pinning policy, sentinel monitoring configuration, and change management process for model version upgrades. Module 12 (Post-Market Monitoring) records sentinel monitoring results, GPAI model behavioural drift records, the GPAI provider communication log, and the GPAI disclosure register update history. The broader post-market monitoring framework is covered separately.
The central compensating control for GPAI integration is the GPAI abstraction layer: a software component that sits between the downstream system and the GPAI model API, providing a controlled interface that enforces compliance constraints regardless of the GPAI model's behaviour. This layer is implemented as a dedicated service in the system architecture, documented in AISDP Module 3 as a compliance-critical component. The Technical SME maintains the layer alongside the system code, and changes follow the governance pipeline.
The abstraction layer implements five functions. Input governance validates and logs every request sent to the GPAI model, enforcing input constraints documented in the AISDP. Output validation checks every response against the expected output schema, content safety rules, and domain-specific constraints before the response reaches the downstream system's processing pipeline. Sentinel injection periodically inserts sentinel evaluation requests into the production traffic stream, compares the model's responses against expected baselines, and alerts on deviation. Version tracking logs the exact model version or checkpoint used for each inference, enabling retrospective analysis when the GPAI provider changes the model. Cost and rate monitoring tracks API usage against budgeted limits, preventing runaway costs from recursive agentic loops or adversarial abuse.
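The five functions can be sketched as a single service class. This is a minimal illustration under stated assumptions, not a reference implementation: the model is injected as a callable returning `(text, model_version)`, and the input constraints, safety checks, sentinel baselines, and cost figures are all placeholders:

```python
# Minimal sketch of the GPAI abstraction layer's five functions.
# All thresholds, schemas, and sentinel probes are placeholders.
import time

class GPAIAbstractionLayer:
    def __init__(self, call_model, max_monthly_cost=1000.0,
                 sentinel_every=100):
        self.call_model = call_model   # wraps the provider API (assumed)
        self.max_monthly_cost = max_monthly_cost
        self.sentinel_every = sentinel_every
        self.request_log = []          # input governance audit trail
        self.version_log = []          # version tracking per inference
        self.spend = 0.0
        self._count = 0
        # Sentinel probes with expected baseline answers (placeholders).
        self.sentinels = {"SENTINEL: 2+2=": "4"}

    def _validate_input(self, prompt: str) -> None:
        # Input governance: enforce constraints documented in the AISDP.
        if not prompt or len(prompt) > 8000:
            raise ValueError("prompt violates input constraints")

    def _validate_output(self, text: str) -> str:
        # Output validation: schema and content-safety checks before the
        # response reaches the downstream processing pipeline.
        if not text.strip():
            raise ValueError("empty model output")
        return text

    def _run_sentinel(self) -> None:
        # Sentinel injection: known-answer probes, alert on deviation.
        for probe, expected in self.sentinels.items():
            answer, _ = self.call_model(probe)
            if expected not in answer:
                print(f"ALERT: sentinel drift on {probe!r}")

    def infer(self, prompt: str, cost_per_call: float = 0.01) -> str:
        # Cost and rate monitoring: stop runaway agentic loops.
        if self.spend + cost_per_call > self.max_monthly_cost:
            raise RuntimeError("budget exceeded")
        self._validate_input(prompt)
        self.request_log.append((time.time(), prompt))
        text, model_version = self.call_model(prompt)
        self.version_log.append(model_version)  # version tracking
        self.spend += cost_per_call
        self._count += 1
        if self._count % self.sentinel_every == 0:
            self._run_sentinel()
        return self._validate_output(text)

# Usage with a stubbed model callable returning (text, model_version):
layer = GPAIAbstractionLayer(lambda p: ("4", "model-v1"))
print(layer.infer("2+2="))  # prints "4"
```

Injecting the model as a callable keeps the layer provider-agnostic, which is what lets it enforce the same compliance constraints across model version changes.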
Without the GPAI abstraction layer, the compliance functions must be implemented within the application code and verified manually. Input validation is coded into the application's request handler, and output validation is coded into the response processing logic. Sentinel evaluation is run as a scheduled batch job rather than inline with production traffic. Version tracking relies on logging the API response headers, as most GPAI providers include a model identifier in the response. Cost monitoring uses the GPAI provider's billing dashboard rather than real-time application-level tracking.
This approach is feasible for single-system deployments with moderate traffic volumes. At scale, the abstraction layer becomes necessary for operational reliability and auditability.
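The fallback's version-tracking step can be sketched as application-level response handling. The header name below is hypothetical (providers vary in where they surface the model identifier), so this is an illustration of the pattern rather than any specific API:

```python
# Illustrative fallback: version tracking via a model identifier in the
# API response headers, with output validation coded directly into the
# response-processing logic. The header name is hypothetical.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("gpai-version")

def handle_response(status: int, headers: dict, body: str) -> str:
    if status != 200:
        raise RuntimeError(f"GPAI API error: {status}")
    model_id = headers.get("x-model-version", "unknown")
    log.info("inference served by model version %s", model_id)
    payload = json.loads(body)
    text = payload.get("output", "")
    # Output validation lives in the application, not a shared layer.
    if not text:
        raise ValueError("empty model output")
    return text
```

Because each application re-implements this logic independently, consistency across systems has to be verified manually, which is the scaling limit the abstraction layer removes.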
Do parameter-efficient methods such as LoRA count as fine-tuning? Yes. All parameter-efficient fine-tuning methods, including LoRA, QLoRA, and prefix tuning, constitute fine-tuning for the provider boundary analysis and trigger provider status under Article 25(1)(b).
Does the GPAI provider's Article 53 compliance satisfy the fine-tuning organisation's obligations? No. The GPAI provider's Article 53 compliance does not substitute for any high-risk system obligations. The fine-tuning organisation must independently prepare a full AISDP, conduct a conformity assessment, and register in the EU database.
Is the GPAI abstraction layer mandatory? Not strictly. A procedural alternative exists where compliance functions are coded into the application and verified manually, but this approach is only feasible for single-system deployments with moderate traffic. At scale, the abstraction layer becomes necessary for operational reliability and auditability.
What GPAI content must the AISDP contain? Each AISDP module requires specific GPAI content, from model identification and integration architecture through to prompt injection defences, version pinning, and post-market monitoring of behavioural drift.
What is the GPAI abstraction layer? A dedicated software component between the downstream system and the GPAI model API that enforces compliance through input governance, output validation, sentinel injection, version tracking, and cost monitoring.