Technical guidance for EU AI Act compliance across 22 domains. Written for engineers, compliance leads, and CTOs preparing for the August 2026 deadline.
The EU AI Act (Regulation (EU) 2024/1689) establishes the world's first comprehensive regulatory framework for artificial intelligence, with full enforcement for high-risk systems beginning 2 August 2026. Every high-risk AI system placed on the EU market from that date must carry a completed conformity assessment, a signed Declaration of Conformity, CE marking, and registration in the EU database. The evidentiary backbone for all of these requirements is the AI System Documentation Package (AISDP).
Article 9 of the EU AI Act requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system that operates as a continuous, iterative process throughout the entire system lifecycle. This pillar page covers the four-tier classification framework, the Fundamental Rights Impact Assessment, five complementary risk identification methods, scoring and calibration, residual risk acceptability, and the iterative governance obligations that form the backbone of EU AI Act compliance.
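To make the scoring and calibration concrete, here is a minimal Python sketch of a risk register entry built on a severity × likelihood matrix. The four-point scales, field names, and acceptability threshold are illustrative assumptions, not values prescribed by the Act; they must be calibrated through your own Article 9 process.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4

class Likelihood(IntEnum):
    RARE = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4

@dataclass
class RiskEntry:
    """One row of an Article 9 risk register (illustrative schema)."""
    risk_id: str
    description: str
    severity: Severity
    likelihood: Likelihood
    mitigations: list[str] = field(default_factory=list)
    residual_severity: Severity | None = None
    residual_likelihood: Likelihood | None = None
    reviewed_on: date | None = None

    @property
    def inherent_score(self) -> int:
        return self.severity * self.likelihood

    @property
    def residual_score(self) -> int | None:
        if self.residual_severity and self.residual_likelihood:
            return self.residual_severity * self.residual_likelihood
        return None

    def residual_acceptable(self, threshold: int = 4) -> bool:
        """Accept only if mitigations bring the score to or below the calibrated threshold."""
        score = self.residual_score
        return score is not None and score <= threshold
```

Keeping the residual fields separate from the inherent ones preserves the before/after evidence that the iterative review cycle needs.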
Model selection under the EU AI Act is a compliance decision as much as an engineering one. The choice of model architecture directly determines the organisation's ability to satisfy requirements for transparency under Article 13, explainability supporting Article 14 human oversight, accuracy and robustness under Article 15, and data governance under Article 10. This pillar page covers model origin risk, copyright exposure, geopolitical considerations, the fine-tuning provider boundary, and the Model Selection Record.
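As an illustration of what a Model Selection Record might capture, the sketch below maps each field to the Article it evidences. Every field name is a hypothetical schema choice for this example, not a mandated format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelSelectionRecord:
    """Hypothetical fields for a Model Selection Record; adapt to your AISDP template."""
    candidate_model: str                    # provider/model identifier
    origin: str                             # provider jurisdiction, for origin risk
    licence: str                            # copyright and licence exposure
    transparency_evidence: str              # Article 13: model card, eval reports reviewed
    oversight_hooks: str                    # Article 14: explainability features relied on
    accuracy_benchmarks: dict[str, float]   # Article 15: metrics at selection time
    data_governance_notes: str              # Article 10: training-data disclosures reviewed
    decision: str                           # selected / rejected, with rationale
```

Freezing the record at decision time matters: it documents what was known when the choice was made, which is the evidence an assessor will ask for.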
Article 10 of the EU AI Act imposes detailed data governance requirements on training, validation, and testing datasets for high-risk AI systems. These requirements cover dataset documentation, completeness assessment, fairness and bias evaluation, data lineage, special category data processing under Article 10(5), GDPR alignment, third-party data governance, and embedding model and knowledge base governance. This page translates each obligation into concrete engineering practices documented in AISDP Module 4.
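One of those practices, bias evaluation, can be partially automated. The sketch below compares group shares in a training set against reference population shares, assuming a pandas DataFrame and a column name invented for the example; it is one input to an Article 10(2) examination, not a complete fairness assessment.

```python
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str,
                       reference: dict[str, float]) -> pd.DataFrame:
    """Compare group shares in a dataset against expected population shares.

    `reference` maps group label -> expected share (summing to 1.0).
    The output feeds a bias examination; deciding what gap is acceptable
    remains a documented human judgement.
    """
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 4),
            "expected_share": expected,
            "gap": round(share - expected, 4),
        })
    return pd.DataFrame(rows)

# Example: flag any group under-represented by more than 5 percentage points.
# report = representation_gap(train_df, "sex", {"female": 0.51, "male": 0.49})
# flagged = report[report["gap"] < -0.05]
```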
Annex IV requires detailed documentation of the system's architecture, design, and computational infrastructure. This section structures that documentation around an eight-layer reference architecture spanning data ingestion through monitoring, with per-layer controls against intent and outcome drift.
Traceability underpins the entire AISDP. Article 12 requires automatic recording, Article 3(23) defines substantial modification, and Article 72 requires change tracking. This section covers version control governance across six artefact types: code, data, models, configuration, prompts, and knowledge bases.
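A simple way to make changes across all six artefact types detectable is a per-release hash manifest. The sketch below assumes a repository layout with one directory per artefact type; the directory names and manifest format are illustrative.

```python
import hashlib
import json
from pathlib import Path

ARTEFACT_TYPES = ["code", "data", "models", "configuration", "prompts", "knowledge_bases"]

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large model and data files are handled safely."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict:
    """Hash every file under each artefact directory into one release manifest."""
    manifest = {}
    for artefact in ARTEFACT_TYPES:
        base = root / artefact
        manifest[artefact] = {
            str(p.relative_to(root)): sha256_of(p)
            for p in sorted(base.rglob("*")) if p.is_file()
        }
    return manifest

# Persist the manifest alongside the release tag; any changed hash signals a
# change that must be assessed against the Article 3(23) substantial-modification
# criteria before the new version ships.
# Path("release_manifest.json").write_text(json.dumps(build_manifest(Path(".")), indent=2))
```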
The CI/CD pipeline produces compliance evidence as a by-product of development. This section covers pipeline orchestration, model validation gates, experiment tracking, automated documentation generation, compliance-gated deployment, and pipeline monitoring for high-risk AI systems.
A governance CI/CD pipeline extends the standard ML pipeline to prove that every compliance obligation was satisfied at every stage. It adds five layers: policy enforcement, governance gates, evidence generation, AISDP synchronisation, and audit persistence.
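As a minimal sketch of one such layer, a governance gate can run as a CI step that blocks deployment when required evidence is missing or an acceptance metric falls below its gate. The evidence file names and threshold below are assumptions for illustration, not a standard.

```python
import json
import sys
from pathlib import Path

# Gate values should come from the approved risk management file, not from code.
REQUIRED_EVIDENCE = ["risk_review.json", "model_eval.json", "data_lineage.json"]
MIN_ACCURACY = 0.92  # illustrative Article 15 acceptance criterion

def governance_gate(evidence_dir: Path) -> list[str]:
    """Return a list of blocking failures; an empty list means the gate passes."""
    failures = []
    for name in REQUIRED_EVIDENCE:
        if not (evidence_dir / name).exists():
            failures.append(f"missing evidence artefact: {name}")
    eval_path = evidence_dir / "model_eval.json"
    if eval_path.exists():
        metrics = json.loads(eval_path.read_text())
        if metrics.get("accuracy", 0.0) < MIN_ACCURACY:
            failures.append(f"accuracy {metrics.get('accuracy')} below gate {MIN_ACCURACY}")
    return failures

if __name__ == "__main__":
    problems = governance_gate(Path("evidence"))
    for p in problems:
        print(f"GATE FAIL: {p}")
    sys.exit(1 if problems else 0)  # non-zero exit blocks the deployment stage
```

The gate's own output is worth persisting too: a passed check is audit evidence, and a failed one is a documented non-conformity.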
Article 15 requires high-risk AI systems to be resilient against unauthorised exploitation of vulnerabilities. This section covers traditional cybersecurity foundations, AI-specific threats including the OWASP LLM Top 10, cross-regulatory mapping for NIS2, CRA, and DORA, and supply chain security.
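For a flavour of one OWASP LLM Top 10 control (LLM01, prompt injection), the sketch below applies a heuristic pre-filter to untrusted text. The patterns are illustrative, and pattern matching alone is not adequate mitigation; treat it as one defence-in-depth layer alongside privilege separation and strict output handling.

```python
import re

# Minimal, illustrative pre-filter for prompt-injection attempts in
# user-supplied or retrieved text. Known-bad patterns only; a determined
# attacker will evade them, so this is never the sole control.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"reveal .*(system prompt|hidden instructions)", re.I),
]

def flag_untrusted_input(text: str) -> bool:
    """Return True if the text matches a known-bad injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```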
Article 43(2) provides that for most high-risk AI systems classified under Annex III, conformity assessment is an internal procedure based on Annex VI. This section covers the assessment structure, documentation standards, assessor independence, organisational roles, notified body engagement, non-conformity management, and continuous assessment.
Articles 47 and 48 establish the formal outputs of conformity assessment: the Declaration of Conformity and the CE marking. Together with the harmonised standards framework under Article 40, these create enforceable obligations with personal and organisational liability consequences for providers of high-risk AI systems.
Articles 49 and 71 require providers of high-risk AI systems to register in the publicly accessible EU database before placing the system on the market or putting it into service. This section covers database registration, the national competent authority landscape, inspection readiness, regulatory sandbox participation, multi-jurisdiction deployment, the penalty structure under Article 99, and system withdrawal procedures.
Article 72 requires providers of high-risk AI systems to establish a post-market monitoring system that actively and systematically collects, documents, and analyses relevant data throughout the system's lifetime. This section covers the PMM plan, monitoring implementation, alerting, serious incident reporting under Article 73, and system decommissioning.
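A minimal monitoring-and-alerting sketch, assuming the PMM plan declares a baseline metric and a tolerance (the numbers below are illustrative, and the open_incident helper in the usage comment is hypothetical):

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Rolling-window monitor that signals an alert when a tracked metric
    degrades past the threshold declared in the PMM plan."""

    def __init__(self, baseline: float, tolerance: float, window: int = 500):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def observe(self, score: float) -> bool:
        """Record one production observation; return True if an alert should fire."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # wait for a full window before alerting
        return mean(self.scores) < self.baseline - self.tolerance

# monitor = DriftMonitor(baseline=0.92, tolerance=0.03)
# if monitor.observe(prediction_confidence):
#     open_incident("accuracy drift", article="72")  # hypothetical helper
```

Whether a fired alert escalates to an Article 73 serious-incident report is a documented human decision; the monitor only guarantees the signal is collected systematically.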
Articles 9, 12, 14, 72, and 73 of the EU AI Act collectively demand continuous, multi-layered operational oversight throughout a high-risk AI system's lifetime. This section structures that oversight as a six-level pyramid, from technical monitoring through to external regulatory bodies, covering break-glass procedures, escalation without reprisal, AI literacy, fatigue countermeasures, and corporate governance integration.
The EU AI Act requires high-risk AI systems to satisfy obligations under Articles 8 to 15 before deployment. This seven-phase delivery process sequences those obligations into a practical 20-to-28-week programme with defined roles, artefacts, governance gates, and guidance for agile teams and brownfield systems.
Article 25 requires downstream providers who build high-risk AI systems on general-purpose AI models to meet the full Chapter 2 compliance obligations, even for components they did not build. This section provides a complete compliance architecture for navigating the information asymmetry between GPAI providers and downstream system providers.
Retrieval-augmented generation introduces compliance challenges spanning data governance, version control, cybersecurity, and post-market monitoring that the base GPAI integration guidance does not fully address. The knowledge base is a compliance-critical data asset under Article 10 principles, and the retrieval pipeline is a decision-making component requiring dedicated documentation in the AISDP.
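One practical control is to freeze the corpus into versioned snapshots so that every answer can be traced to the exact knowledge base state that produced it. A minimal sketch, assuming documents are identified by stable IDs; the record format is illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def snapshot_knowledge_base(documents: dict[str, str]) -> dict:
    """Freeze a retrievable corpus into a versioned snapshot record.

    `documents` maps document ID -> canonical text. The corpus digest lets any
    later answer be traced to the exact corpus state that produced it.
    """
    doc_hashes = {
        doc_id: hashlib.sha256(text.encode("utf-8")).hexdigest()
        for doc_id, text in sorted(documents.items())
    }
    corpus_digest = hashlib.sha256(
        json.dumps(doc_hashes, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {
        "snapshot_at": datetime.now(timezone.utc).isoformat(),
        "document_count": len(doc_hashes),
        "corpus_digest": corpus_digest,
        "documents": doc_hashes,
    }

# Store the returned record with each release, and log `corpus_digest` on every
# retrieval so responses stay traceable to a specific corpus version.
```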
Agentic AI systems that plan, reason, and act autonomously break the single-inference model that the AISDP framework assumes. Articles 12, 14, and 9 require logging, human oversight, and risk management controls adapted for systems whose behaviour is not fully predictable.
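For Article 12-style logging adapted to agents, the essential move is recording every plan/act step rather than only the final output. A minimal structured-trace sketch; the field names are hypothetical schema choices:

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

log = logging.getLogger("agent_trace")

def log_agent_step(run_id: str, step: int, tool: str,
                   arguments: dict, outcome: str) -> None:
    """Append one structured record per autonomous action.

    Logging each plan/act step, not just the final answer, is what lets a
    human reviewer reconstruct and oversee behaviour that was not fully
    predictable at design time.
    """
    log.info(json.dumps({
        "event_id": str(uuid4()),
        "run_id": run_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "tool": tool,
        "arguments": arguments,
        "outcome": outcome,
    }, sort_keys=True))

# run_id = str(uuid4())
# log_agent_step(run_id, 1, "search_flights", {"route": "AMS-LIS"}, "12 results")
```

A shared run_id across steps is what turns isolated log lines into the reconstructable decision trail that human oversight under Article 14 depends on.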
Annex IV of the EU AI Act requires documentation of 'the elements of the AI system and of the process for its development.' For composite systems using multiple models, compliance properties emerge from component interactions and cannot be reduced to individual model evaluations.
Article 26 of the EU AI Act imposes eight distinct obligations on deployers of high-risk AI systems, each with penalty exposure reaching EUR 15 million or 3 per cent of global annual turnover. This handbook provides the operational reference for meeting those obligations.
This section provides the quantified cost model for EU AI Act compliance, covering staffing, tooling, infrastructure, external advisory, and ongoing maintenance across three organisational profiles. All figures reflect 2026 market rates for Western European jurisdictions.
Building a compliant AI system requires tooling across thirteen categories, from pipeline orchestration through to incident management. This reference maps every tool to its compliance function, licence model, and procedural alternative, enabling engineering teams to plan their stack and budget holders to assess commercial implications.