The EU AI Act entered into force on 1 August 2024, with obligations phased across a staggered timeline running to August 2027. Prohibited practices and AI literacy requirements took effect in February 2025. GPAI obligations apply from August 2025. The full high-risk framework, requiring conformity assessment, Declaration of Conformity, CE marking, and EU database registration, becomes enforceable on 2 August 2026.
The EU AI Act entered into force on 1 August 2024, launching a staggered enforcement timeline that phases obligations across three years. Understanding which provisions are already enforceable and which are pending is essential for organisations planning their AISDP preparation.
Prohibited practices under Article 5 took effect on 2 February 2025, banning subliminal manipulation, exploitation of vulnerable groups, social scoring, untargeted facial recognition scraping, emotion recognition in workplaces and educational institutions, criminal risk profiling, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces. Organisations operating systems falling within any of these categories face immediate enforcement risk. The AISDP process cannot remediate a prohibited system; the only compliant action is cessation.
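The screening step this implies, inventorying every AI system and flagging anything that falls under an Article 5 category, can be sketched as a simple filter. The category labels and system records below are hypothetical paraphrases for illustration, not an official legal taxonomy:

```python
# Illustrative sketch only: the tag names and inventory records are
# hypothetical paraphrases of the Article 5 prohibitions, not legal terms.
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "social_scoring",
    "criminal_risk_profiling",
    "untargeted_facial_recognition_scraping",
    "emotion_recognition_workplace_education",
    "biometric_categorisation_sensitive_attributes",
    "realtime_remote_biometric_identification",
}

def screen_inventory(systems: list[dict]) -> list[str]:
    """Return names of systems tagged with any Article 5 category.

    These cannot enter the AISDP process; the only compliant
    action for them is decommissioning.
    """
    return [
        s["name"]
        for s in systems
        if PROHIBITED_CATEGORIES & set(s.get("use_case_tags", []))
    ]

inventory = [
    {"name": "cv-screening", "use_case_tags": ["recruitment_ranking"]},
    {"name": "hr-sentiment", "use_case_tags": ["emotion_recognition_workplace_education"]},
]
print(screen_inventory(inventory))  # -> ['hr-sentiment']
```

Tag-based screening of this kind is only a first pass; whether a real system falls under Article 5 is a legal determination about its actual use, not its label.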
AI literacy under Article 4 also became enforceable on 2 February 2025. Providers and deployers must ensure that staff involved in AI system operation have a sufficient level of AI literacy. This obligation applies across all risk tiers and predates the high-risk framework. The AI Office Code of Practice for GPAI, finalised in July 2025 under Article 56, provides a compliance pathway for GPAI providers.
GPAI obligations under Articles 51 through 56 took effect on 2 August 2025. Providers of general-purpose AI models must comply with transparency obligations, copyright policy requirements, and, for models with systemic risk, additional safety obligations including adversarial testing and serious incident reporting. Organisations integrating GPAI models into high-risk systems must account for the GPAI provider's compliance status in their own risk assessment and model selection rationale.
The penalty framework for prohibited AI practices also became enforceable on 2 August 2025, with fines of up to EUR 35 million or 7 per cent of global annual turnover for violations. Member states were required to designate their national competent authorities by this date, though as of early 2026 the operational readiness of these authorities varies considerably across the EU.
The full high-risk framework becomes enforceable on 2 August 2026. This is the critical deadline for AISDP preparation. From this date, every high-risk AI system placed on the market or put into service must have a completed conformity assessment, a signed Declaration of Conformity, CE marking affixed where applicable, and registration in the EU database.
Article 8 is the umbrella provision binding these obligations together. It requires high-risk AI systems to comply with all requirements of Chapter III, Section 2, taking into account the intended purpose and the state of the art. Compliance with any individual article cannot be assessed in isolation; the conformity assessment evaluates the system against the full set of requirements as an integrated whole. The AISDP's twelve-module structure reflects this integrative mandate.
The designation of notified bodies under Article 28 is proceeding gradually, with only a small number formally designated so far.
AI systems constituting safety components of products covered by Annex I harmonisation legislation, including machinery, medical devices, lifts, toys, and radio equipment, benefit from an extended deadline of 2 August 2027. These products already have conformity assessment procedures under their respective directives; the AI Act requirements layer on top.
The European Commission's Digital Omnibus proposal, currently in legislative process, may adjust certain timelines and requirements. Organisations should monitor its progress but should not rely on prospective changes as a reason to defer preparation.
The AI Act does not operate in isolation. The GDPR governs personal data used to train, validate, test, and operate AI systems, with the AISDP's data governance module demonstrating GDPR compliance alongside AI Act compliance. NIS2 requires organisations that are essential or important entities to integrate AI system cybersecurity controls with their broader risk management measures.
The Cyber Resilience Act applies to AI systems delivered as software products, with Article 15(5) providing a deemed compliance mechanism for overlapping cybersecurity requirements. DORA creates additional obligations for financial sector organisations deploying high-risk AI systems. The General Product Safety Regulation applies to AI embedded in consumer products.
Organisations subject to multiple frameworks should establish a unified compliance mapping identifying where a single control or evidence artefact satisfies overlapping requirements, reducing duplication and ensuring consistency.
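As an illustration, such a unified mapping can start as a simple table keyed by evidence artefact. The artefact names and article pairings below are indicative examples for the sketch, not a compliance determination:

```python
# Hypothetical unified compliance map: each evidence artefact lists the
# framework requirements it contributes toward, so an overlapping
# obligation is satisfied by one control rather than duplicated.
CONTROL_MAP = {
    "encryption-at-rest-policy": ["AI Act Art. 15", "GDPR Art. 32", "NIS2 Art. 21"],
    "incident-response-runbook": ["AI Act Art. 73", "NIS2 Art. 23", "DORA Art. 17"],
    "training-data-lineage-log": ["AI Act Art. 10", "GDPR Art. 30"],
}

def requirements_covered(framework: str) -> list[str]:
    """List artefacts that provide evidence toward the given framework."""
    return sorted(
        artefact
        for artefact, reqs in CONTROL_MAP.items()
        if any(r.startswith(framework) for r in reqs)
    )

print(requirements_covered("NIS2"))
# -> ['encryption-at-rest-policy', 'incident-response-runbook']
```

In practice this grows into a governed register with owners and review dates; the point of the sketch is that the mapping runs from artefact to requirements, so each control is maintained once.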
Organisations that begin systematic AISDP preparation now will reach the August 2026 deadline with a compliance posture that can withstand regulatory scrutiny. The preparation timeline for a medium-complexity system spans approximately twenty to twenty-eight weeks using the seven-phase delivery process.
Organisations that delay will find that this timeline leaves insufficient margin for the remediation cycles that first-time assessments invariably require. The August 2026 deadline is approaching, and the regulatory landscape itself remains in flux as national competent authorities develop their operational procedures and harmonised standards are finalised. Engaging with relevant authorities early, even where those authorities are still establishing their procedures, is advisable.
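Working backwards from the deadline makes the remaining margin concrete. A minimal sketch, in which the remediation buffer is an assumed planning parameter rather than a figure from the Act:

```python
from datetime import date, timedelta

# Full high-risk framework becomes enforceable on this date.
HIGH_RISK_DEADLINE = date(2026, 8, 2)

def latest_start(prep_weeks: int, remediation_buffer_weeks: int = 0) -> date:
    """Latest date preparation can begin and still meet the deadline.

    remediation_buffer_weeks is a hypothetical allowance for the
    remediation cycles that first-time assessments typically require.
    """
    total = timedelta(weeks=prep_weeks + remediation_buffer_weeks)
    return HIGH_RISK_DEADLINE - total

# A 28-week preparation with no buffer must start by:
print(latest_start(28))      # 2026-01-18
# Adding an 8-week remediation buffer pulls the start into 2025:
print(latest_start(28, 8))   # 2025-11-23
```

Even under the shorter twenty-week estimate with no buffer, the latest start lands in mid-March 2026, which is why deferring preparation leaves little room for error.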
The Digital Omnibus proposal may adjust certain timelines, but organisations should not rely on prospective legislative changes as a reason to defer preparation. Regulatory uncertainty is a reason to prepare earlier, not later.
Conduct an immediate inventory of all AI systems and assess each against the eight Article 5 prohibited categories. Any system that falls within a prohibited category must be decommissioned immediately, as the AISDP process cannot remediate a prohibited system.
A medium-complexity system requires approximately twenty to twenty-eight weeks of preparation. Organisations with multiple high-risk systems need proportionally more lead time and should begin with the highest-risk systems first.
Article 99 requires penalties to be proportionate to the organisation's size, economic viability, and the nature and gravity of the infringement. This reduces the quantum but does not eliminate enforcement risk.
Organisations should engage with relevant authorities early, even where those authorities are still establishing procedures. The compliance deadline is not contingent on authority readiness.
The decisive deadline is 2 August 2026. From that date, every high-risk system needs a completed conformity assessment, a signed Declaration of Conformity, CE marking, and EU database registration.
GPAI obligations apply from 2 August 2025, covering transparency, copyright, and safety requirements under Articles 51 through 56.
The extended deadline of 2 August 2027 applies to AI systems that are safety components of products covered by existing EU harmonisation legislation.
These timelines overlap with obligations under the GDPR, NIS2, the Cyber Resilience Act, and DORA, creating coordinated compliance requirements for data governance, cybersecurity, incident reporting, and third-party risk management.