The EU body established under Article 64 to oversee GPAI compliance, develop codes of practice, and support the implementation of the AI Act.
AI literacy
The requirement under Article 4 that staff involved in the operation and use of AI systems have a sufficient level of AI literacy. In force since 2 February 2025 across all risk tiers.
AISDP
AI System Documentation Package. The central compliance artefact required by Article 11, comprising twelve modules that document a high-risk AI system's design, risk profile, testing, governance, and post-market monitoring.
Also known as: AI System Documentation Package, Technical Documentation
AISDP module
One of the twelve numbered sections of the AI System Documentation Package (see the table in Section 1.3). Modules are the structural units of the AISDP itself; sections of this document describe how to produce the content for those modules.
AUC-ROC
A performance metric for binary classification models measuring the model's ability to distinguish between positive and negative classes. Values range from 0.5 (random) to 1.0 (perfect).
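As a minimal illustration of the probabilistic definition above, AUC-ROC can be computed directly as the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties count half). The function name and toy scores are illustrative, not drawn from this document:

```python
def auc_roc(labels, scores):
    """AUC-ROC via its probabilistic definition: the fraction of
    positive/negative pairs where the positive example scores higher
    (ties contribute 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranker scores 1.0; a random one hovers around 0.5.
print(auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Production pipelines would normally use an optimised library implementation; the pairwise form shown here is O(n²) but makes the definition concrete.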
Action space policy
In agentic AI systems, the governance framework that constrains which actions an AI agent may autonomously execute versus which require human authorisation. Defines the boundaries of autonomous operation.
Adversarial testing
The systematic testing of AI systems against adversarial inputs designed to cause failures, including prompt injection, data poisoning, and evasion attacks. Required for systemic risk GPAI models under Article 55.
Also known as: red-teaming
Affected persons
Natural persons or groups of persons who are subject to or otherwise affected by an AI system. Their rights are protected through transparency obligations under Articles 13, 50, and 86.
Agentic AI
AI systems that operate with a degree of autonomy, taking sequences of actions in pursuit of goals with limited direct human instruction. Raises specific compliance challenges for human oversight and action space governance.
Annex I products
Products covered by EU harmonisation legislation listed in Annex I of the AI Act. AI systems that are safety components of these products follow a different conformity assessment pathway with an extended deadline of 2 August 2027.
Article 6(3) exception
The exception that allows certain Annex III AI systems to be reclassified as non-high-risk when both a functional criterion and a risk criterion are satisfied. Both criteria must be met; satisfying one alone is insufficient.
See also: Classification Rules for High-Risk AI
Authorised representative
A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or GPAI model to carry out specified obligations under the Regulation.
B
Break-glass
An emergency procedure that enables authorised personnel to halt AI system processing immediately. Required by Article 14(4)(e). The break-glass architecture is described in Section 13.3, with compensating controls for both provider and deployer contexts.
See also: Human Oversight
C
CE marking
The conformity marking affixed to high-risk AI systems under Article 48 to indicate conformity with the AI Act requirements.
CI/CD
The automation infrastructure that builds, tests, and deploys software. Sections 7 and 7A describe the compliance-grade CI/CD pipeline for AI systems. Section 7.0A provides a complete reference implementation.
Also known as: continuous integration, continuous deployment
Canary deployment
A progressive deployment strategy where a new system version receives a small percentage of production traffic (typically 1 to 5 percent) while the existing version handles the remainder. Automated analysis compares the canary's metrics against the existing version before proceeding to full rollout. Section 7.9 describes the compliance-grade implementation.

Note: throughout the document, Article references without further qualification refer to the EU AI Act (Regulation (EU) 2024/1689). References to other legislation are given in full on first use in each section and abbreviated thereafter.
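The routing and promotion logic can be sketched as follows; the function names, the 5 percent default, and the error-rate tolerance are illustrative assumptions, not the Section 7.9 implementation:

```python
import random

def route(canary_fraction=0.05):
    """Route a single request to the canary with the given probability;
    all other traffic goes to the stable version."""
    return "canary" if random.random() < canary_fraction else "stable"

def canary_passes(stable_error_rate, canary_error_rate, tolerance=0.01):
    """Promote only if the canary's error rate does not exceed the
    stable version's by more than the tolerance."""
    return canary_error_rate <= stable_error_rate + tolerance

assert canary_passes(0.020, 0.022)       # within tolerance: proceed
assert not canary_passes(0.020, 0.045)   # regression: roll back
```

A real comparison would use statistical tests over many metrics rather than a single threshold, but the gate structure is the same.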
Classification Decision Record
The precursor document to the AISDP that records the system's classification within the AI Act's risk tiers and provides the rationale for that determination.
Also known as: CDR
D
DPIA
Data Protection Impact Assessment required under GDPR Article 35 for processing likely to result in high risk to the rights and freedoms of natural persons. Must be coordinated with the FRIA required by AI Act Article 27.
Also known as: Data Protection Impact Assessment
Data drift
A change in the statistical properties of the input data over time relative to the data the model was trained on. Detected through metrics such as PSI; investigation is required when detection thresholds are exceeded.
Declaration of Conformity
The formal legal statement required by Article 47 declaring that a high-risk AI system conforms to the requirements of the Regulation. Signed by the provider and carrying material legal consequences.
Also known as: DoC, EU Declaration of Conformity
Deployer
A natural or legal person that uses an AI system under its authority, except where the system is used in a personal, non-professional activity. Many organisations are both provider and deployer of their own AI systems, which triggers dual obligations (notably the FRIA requirement under Article 27).
Also known as: AI deployer, user of AI system
See also: Classification Rules for High-Risk AI
E
EU database
The EU database for high-risk AI systems established under Article 71, where providers and deployers must register their systems before placement on the market or putting into service.
Evidence pack
The collection of all supporting artefacts that substantiate the claims made in the AISDP. Every material assertion must trace to a specific artefact in the evidence pack.
Explainability
The degree to which the internal mechanics of an AI system can be presented in understandable terms to a human. Required for transparency under Article 13 and human oversight under Article 14.
F
FMEA
A structured risk identification methodology that systematically examines each system component for potential failure modes, their effects, and their detectability. One of the five risk identification methods described in Section 2.4.
Also known as: failure mode and effects analysis
See also: Risk Management System
FRIA
Fundamental Rights Impact Assessment. Required by Article 27 for deployers of high-risk AI systems before the system is put into use. Assesses the impact on fundamental rights of affected persons.
Also known as: Fundamental Rights Impact Assessment
Fairness evaluation
The assessment of whether an AI system produces equitable outcomes across demographic subgroups, using metrics such as the Selection Rate Ratio and statistical parity.
G
GPAI
An AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. Defined in Article 3(63). Foundation models from OpenAI, Anthropic, Google, Mistral, and Meta are GPAI models. Section 17 provides the complete GPAI integration compliance architecture.
Also known as: GPAI model, general-purpose AI, foundation model
See also: Classification Rules for High-Risk AI
Governance gate
An automated checkpoint in the CI/CD pipeline that evaluates compliance criteria before allowing deployment to proceed. The five governance gates cover classification, risk, fairness, documentation currency, and deployment authorisation.
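A minimal sketch of such a gate evaluation, with illustrative key names for the five criteria (these are assumptions for the example, not the Section 7A schema):

```python
def evaluate_gates(release):
    """Evaluate the five governance gates against a release record;
    deployment proceeds only if every gate passes."""
    gates = {
        "classification": release.get("classification_current", False),
        "risk":           release.get("residual_risk_accepted", False),
        "fairness":       release.get("srr", 0.0) >= 0.80,  # four-fifths rule
        "documentation":  release.get("aisdp_current", False),
        "authorisation":  release.get("deployment_approved", False),
    }
    failed = [name for name, ok in gates.items() if not ok]
    return len(failed) == 0, failed

ok, failed = evaluate_gates({"classification_current": True,
                             "residual_risk_accepted": True,
                             "srr": 0.75,
                             "aisdp_current": True,
                             "deployment_approved": True})
print(ok, failed)  # False ['fairness']
```

In a pipeline, the boolean result would block or allow the deployment stage, and the list of failed gates would be written to the audit log.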
Grounding verification
The process of checking whether an LLM's outputs are supported by the retrieved context in a RAG system. Functions as a compliance control for accuracy (Article 15) and transparency (Article 13). Section 18.4 describes three implementation approaches.
See also: Accuracy, Robustness, Cybersecurity
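As a deliberately crude stand-in for the approaches in Section 18.4, grounding can be approximated by lexical overlap between the answer and the retrieved context. Real implementations use NLI models or claim-level verification; this sketch (function name and thresholds assumed) only illustrates the shape of the check:

```python
def grounding_score(answer: str, context: str) -> float:
    """Fraction of content words in the answer that also appear in the
    retrieved context. A lexical approximation of grounding; NLI-based
    checks replace this in production."""
    stop = {"the", "a", "an", "is", "are", "of", "to", "in", "and"}
    answer_words = [w for w in answer.lower().split() if w not in stop]
    context_words = set(context.lower().split())
    if not answer_words:
        return 1.0
    hits = sum(w in context_words for w in answer_words)
    return hits / len(answer_words)

ctx = "the patient was prescribed 50 mg of atenolol daily"
print(grounding_score("patient prescribed 50 mg atenolol", ctx))   # 1.0
print(grounding_score("patient prescribed 100 mg warfarin", ctx))  # 0.6
```

A score below a configured threshold would flag the output as potentially ungrounded and trigger the fallback behaviour the system documents.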
H
Harmonised standards
European standards adopted by European Standardisation Organisations at the request of the Commission. Under Article 40, compliance with harmonised standards gives a presumption of conformity with AI Act requirements.
High-risk AI system
An AI system falling within one of the categories listed in Annex III, or constituting a safety component of a product covered by Annex I harmonisation legislation, and not qualifying for the Article 6(3) exception.
Also known as: high-risk AI, HRAIS
See also: High-Risk AI System Areas
I
IaC
The practice of defining deployment infrastructure through version-controlled configuration files rather than manual processes. Section 7.10 addresses IaC for AI systems.
Also known as: infrastructure as code
Importer
A natural or legal person located or established in the Union that places on the market an AI system bearing the name or trademark of a natural or legal person established in a third country.
Also known as: AI system importer
See also: Classification Rules for High-Risk AI
Instructions for Use
The documentation required by Article 13 that accompanies a high-risk AI system, defining its intended purpose, known limitations, human oversight requirements, and monitoring guidance.
Also known as: IFU
See also: Transparency and Information
L
LLM
A type of GPAI model trained on large text corpora to generate, classify, summarise, or transform natural language. LLMs are the most common GPAI model type integrated into high-risk systems. This document uses "LLM" when the guidance is specific to language models and "GPAI model" when it applies to all general-purpose models.
Also known as: large language model
M
MLOps
The practices and tooling for deploying, monitoring, and maintaining ML models in production. The compliance-grade MLOps pipeline is described in Sections 7 and 7A.
Also known as: machine learning operations
Market surveillance
The activities carried out by national competent authorities under Article 74 to ensure that AI systems on the market comply with the Regulation.
Master Compliance Questionnaire
The systematic verification framework that walks through every applicable requirement of the AI Act, prompting the assessor to confirm compliance and reference supporting evidence.
Also known as: MCQ
Model card
A documentation artefact that provides structured information about a model's intended use, performance characteristics, limitations, and evaluation results.
Model registry
A centralised repository that stores model artefacts, metadata, performance metrics, and lineage information. Provides version control and traceability for all model components in the system.
Also known as: ML model registry, model catalogue
Multi-model system
An AI system architecture that integrates multiple AI models working together, potentially from different providers. Creates compound compliance obligations and cascading risk considerations.
N
NER
An NLP task that identifies and classifies named entities (persons, organisations, locations, clinical terms) in text. Used in the MediAssist AI worked example (Section 15.7) for clinical entity extraction.
NLI
A task that determines whether a hypothesis is entailed by, contradicts, or is neutral with respect to a premise. Used in Section 18.4 for grounding verification in RAG systems.
National competent authority
The authority designated by each Member State under Article 70 to supervise the application and implementation of the AI Act. Enforcement maturity varies considerably across Member States.
Also known as: NCA
Notified body
An independent conformity assessment body designated by a Member State under Article 28. Required for third-party assessment of biometric identification systems under Annex III, point 1.
Also known as: conformity assessment body, NB
O
OPA
An open-source policy engine that evaluates declarative policies written in the Rego language. Used throughout for policy-as-code enforcement in the governance pipeline (Section 7A).
Also known as: open policy agent, Rego
P
PSI
A statistical measure of the difference between two distributions. Used throughout this document to detect data drift and output distribution shift. PSI below 0.10 is typically stable; 0.10 to 0.20 warrants investigation; above 0.20 indicates significant shift.
Also known as: population stability index
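A minimal PSI computation, sketched under stated assumptions: bins are derived from the reference sample's range, and proportions are floored at a small epsilon to avoid log(0). The function name and bin count are illustrative:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (expected)
    and a current sample (actual)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # bin index; out-of-range clamps
            counts[idx] += 1
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]
assert psi(baseline, baseline) < 1e-9               # identical: PSI ~ 0
shifted = [x + 0.5 for x in baseline]
assert psi(baseline, shifted) > 0.20                # clear shift: investigate
```

The two assertions mirror the thresholds in the definition: near zero for a stable population, above 0.20 for a significant shift.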
Placing on the market
The first making available of an AI system on the Union market. This is the event that starts the ten-year documentation retention clock.
Post-market monitoring
The systematic process required by Article 72 for providers to continuously monitor the performance, safety, and compliance of high-risk AI systems after placement on the market.
Also known as: PMM
Prompt governance
The versioning, testing, and change management of prompts used in LLM-based AI systems. Prompts are treated as compliance-relevant artefacts requiring the same rigour as code and model artefacts.
Provider
A natural or legal person that develops an AI system or has an AI system developed and places it on the market or puts it into service under its own name or trademark. In this document, the provider is the organisation preparing the AISDP.
Also known as: AI provider, AI system provider
See also: Classification Rules for High-Risk AI
Q
Quality Management System
Required by Article 17. The organisational processes and procedures for AI compliance that provide the general framework within which specific AISDPs operate.
Also known as: QMS
R
RAG
An architecture that combines a GPAI model with a knowledge base, retrieving relevant documents at inference time and providing them as context to the model. Section 18 addresses RAG-specific compliance.
Also known as: retrieval-augmented generation
RPN
The product of severity, occurrence, and detectability scores in an FMEA. Used to prioritise failure modes for risk register inclusion. Higher RPNs indicate greater risk priority.
Also known as: risk priority number
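The prioritisation step can be shown in a few lines; the failure modes and 1-to-10 scores below are hypothetical examples, not entries from any risk register in this document:

```python
def rpn(severity, occurrence, detectability):
    """Risk Priority Number: the product of severity, occurrence, and
    detectability scores (each typically rated 1-10 in an FMEA)."""
    return severity * occurrence * detectability

# Hypothetical failure modes, ranked for risk-register inclusion.
failure_modes = [
    ("silent model-serving timeout", rpn(7, 4, 8)),
    ("training-data schema drift",   rpn(5, 6, 3)),
    ("stale feature cache",          rpn(3, 5, 2)),
]
for name, score in sorted(failure_modes, key=lambda m: m[1], reverse=True):
    print(f"RPN {score:3d}  {name}")
```

Note that in FMEA convention a higher detectability score means the failure is *harder* to detect, which is why it multiplies the risk upward.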
Real-world testing
Testing of high-risk AI systems in real-world conditions outside a laboratory environment, governed by Article 60 and requiring registration under Annex IX.
Regulatory sandbox
A controlled environment established by a competent authority under Article 57 that allows providers to develop, train, validate, and test innovative AI systems under regulatory oversight.
S
SBOM
A structured inventory of all software components, libraries, and dependencies in a system. Required for supply chain security (Section 8.7) and vulnerability management.
Also known as: software bill of materials
See also: Accuracy, Robustness, Cybersecurity
SHAP
A model explanation method based on game-theoretic Shapley values. Provides per-feature contribution scores for individual predictions. Used in Sections 5, 6, and 15 for explainability and feature importance monitoring.
Also known as: Shapley values
SRR
The ratio of the selection rate for a protected characteristic subgroup to the selection rate for the reference group. Values below 0.80 (the four-fifths rule) indicate potential adverse impact. Used throughout for fairness evaluation.
Also known as: selection rate ratio, four-fifths rule
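The ratio and the four-fifths check reduce to one division; the function name and the counts below are illustrative:

```python
def selection_rate_ratio(selected_sub, total_sub, selected_ref, total_ref):
    """Selection Rate Ratio: subgroup selection rate divided by the
    reference-group selection rate. Values below 0.80 breach the
    four-fifths rule and flag potential adverse impact."""
    rate_sub = selected_sub / total_sub
    rate_ref = selected_ref / total_ref
    return rate_sub / rate_ref

# Hypothetical hiring outcomes: 30/100 subgroup vs 50/100 reference.
srr = selection_rate_ratio(30, 100, 50, 100)
print(f"SRR = {srr:.2f}, four-fifths rule "
      f"{'met' if srr >= 0.80 else 'breached'}")
```

Here 0.30 / 0.50 = 0.60, below the 0.80 threshold, so the result would be escalated for fairness review.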
Safety component
A component of a product or system which fulfils a safety function for that product or system. AI systems that are safety components of Annex I products are classified as high-risk under Article 6(1).
See also: Classification Rules for High-Risk AI
Codes of practice
Voluntary codes developed under Article 56 providing a compliance pathway for GPAI providers. The AI Office Code of Practice was finalised in early 2025.
Also known as: Code of Practice
Compensating controls
Throughout this document, compensating controls refer to technical and organisational measures that achieve the regulatory objective through alternative means when the primary recommended approach is unavailable, impractical, or disproportionate. Each compensating controls section explains the challenge, the recommended tooling or approach, and a procedural alternative for organisations that cannot implement the tooling.
Composite version
A structured identifier that captures the specific version of every artefact type (code, data, model, configuration, prompts, knowledge base) deployed at a given point in time. The composite version is the unit of compliance: the AISDP describes a specific composite version, and the Declaration of Conformity attests to it. Section 6.0A provides the full treatment.
Also known as: composite version identifier
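As an illustrative sketch only (the field names and identifier format are assumptions, not a scheme defined by the Act or by Section 6.0A), a composite version could be modelled as an immutable record:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompositeVersion:
    """One version string per artefact type deployed at a point in time."""
    code: str
    data: str
    model: str
    config: str
    prompts: str
    knowledge_base: str

    def identifier(self) -> str:
        # Join all six artefact versions into a single compliance identifier.
        return (f"code:{self.code}+data:{self.data}+model:{self.model}"
                f"+config:{self.config}+prompts:{self.prompts}"
                f"+kb:{self.knowledge_base}")

cv = CompositeVersion("1.4.2", "2024-11", "3.1.0", "9f2c", "v12", "2024-12-01")
print(cv.identifier())
```

Freezing the dataclass reflects the point: once attested in the Declaration of Conformity, a composite version is immutable; any artefact change produces a new identifier.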
Conformity assessment
The process of verifying whether the requirements of the Regulation relating to an AI system have been fulfilled. For most high-risk systems, this is an internal assessment (Annex VI); for biometric identification systems under Annex III, point 1, third-party assessment by a notified body is required.
Also known as: CA
See also: Technical Documentation Contents
Corrective actions
Measures required under Article 20 when a provider becomes aware that a high-risk AI system is not in conformity with the Act, including withdrawal, recall, or modification.
DevSecOps
The integrated practice of development, security, and operations, which embeds security controls throughout the CI/CD pipeline rather than treating security as a separate phase.
Intended purpose
The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional materials, technical documentation, and Declaration of Conformity.
See also: Classification Rules for High-Risk AI
Provider boundary
The legal determination under Article 25 of when an organisation's modifications to a GPAI model make it the 'provider' of the resulting AI system, triggering full provider obligations including AISDP preparation.
Putting into service
The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
Residual risk
Risk remaining after all reasonably practicable risk management measures have been applied. Article 9(4) requires risks to be eliminated or reduced 'as far as possible', making residual risk documentation critical.
See also: Risk Management System
Risk register
The central record of identified risks, their severity scores, likelihood assessments, existing controls, and residual risk levels. Updated continuously as part of the Article 9 risk management system.
See also: Risk Management System
Sentinel dataset
A fixed, curated evaluation dataset maintained alongside the system for continuous behavioural monitoring. The sentinel dataset exercises the fairness, accuracy, and safety dimensions that the AISDP documents. Used in Sections 12 (PMM) and 17 (GPAI integration) for drift detection and GPAI model monitoring.
Serious incident
An incident or malfunctioning of an AI system that directly or indirectly leads to death, serious damage to health or property, or large-scale disruption of critical infrastructure. Must be reported under Article 73.
Substantial modification
A change to the AI system after its placing on the market or putting into service which is not foreseen or planned by the provider and as a result of which the compliance of the system with the requirements of the Regulation is affected or the intended purpose for which the system has been assessed is modified. Section 6.9 provides quantitative thresholds and a decision process for identifying substantial modifications.
Also known as: significant change
See also: Classification Rules for High-Risk AI
Systemic risk model
A GPAI model classified under Article 51 as having capabilities that match or exceed the most advanced models, posing systemic risks. Subject to additional obligations including adversarial testing and serious incident reporting.