The European AI Office coordinates AI Act implementation across the EU, develops binding guidance, and enforces GPAI model obligations. The national competent authority landscape remains fragmented, with fourteen member states having designated no authority as of late 2025. Regulatory sandboxes under Article 57 offer direct supervisory feedback before conformity assessment, reducing the risk of assessment failure.
The European AI Office, established within the European Commission, serves as the central coordination point for AI Act implementation across the EU. The Office coordinates the consistent application of the Act across member states, develops guidelines, templates, and codes of practice, oversees GPAI model compliance and enforcement, manages the scientific panel of independent experts, administers the EU database, and monitors implementation progress. For providers of high-risk systems, the AI Office's most immediately relevant outputs are the implementing acts, delegated acts, and guidance documents that clarify ambiguous requirements.
The AI Office has direct enforcement powers over GPAI model providers under Article 88. Providers of GPAI models with systemic risk interact directly with the AI Office, bypassing national competent authorities. For organisations whose high-risk systems incorporate GPAI models, this creates a dual regulatory relationship: the national competent authority oversees the high-risk system, and the AI Office oversees the underlying GPAI model.
Organisations should monitor the AI Office's publications systematically. The Office publishes guidelines, codes of practice, and implementing guidance that refine the Act's requirements. These documents are non-binding in themselves, though they carry significant interpretive weight: a market surveillance authority assessing compliance will reference the AI Office's guidance. Departing from published guidance without a documented justification is a compliance risk. Organisations can also participate in the AI Office's consultation processes, contributing to draft guidelines and codes of practice to shape the framework and anticipate forthcoming requirements.
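The monitoring discipline described above can be operationalised as a simple guidance register: each tracked publication records whether the system follows it and, if not, the documented justification. This is a minimal sketch; the entries and field names are hypothetical examples, not an official schema.

```python
from dataclasses import dataclass

@dataclass
class GuidanceItem:
    """One tracked AI Office publication (all entries below are hypothetical)."""
    title: str
    followed: bool = True    # does our system follow this guidance?
    justification: str = ""  # documented reason if we depart from it

def undocumented_departures(register):
    """Flag departures from published guidance that lack a documented
    justification -- the compliance risk described in the text."""
    return [g.title for g in register if not g.followed and not g.justification]

register = [
    GuidanceItem("GPAI code of practice (example)"),
    GuidanceItem("High-risk classification guidelines (example)",
                 followed=False),  # departed, no justification recorded
    GuidanceItem("Transparency template (example)",
                 followed=False,
                 justification="Alternative format agreed with authority"),
]

print(undocumented_departures(register))
```

Only the second entry is flagged: a departure with a documented justification is defensible, while an undocumented one is the gap a market surveillance authority would query.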
The national competent authority landscape across the EU is fragmented, with member states at varying stages of operational readiness as of early 2026. Organisations deploying across multiple jurisdictions must navigate this fragmentation carefully.
Among the frontrunner member states, Spain established the Agencia Española de Supervisión de la Inteligencia Artificial (AESIA) via Royal Decree 729/2023, one of the first dedicated AI supervisory authorities in the EU, and has launched a national regulatory sandbox with published procedures. Ireland designated fifteen sector-specific competent authorities covering the financial, regulatory, consumer, health, utility, and telecoms sectors, alongside nine fundamental rights authorities, so organisations must identify the correct sector-specific authority for their system type. The Netherlands is coordinating under the Ministry of Economic Affairs, with the consumer and data protection authorities expected to act as central authorities.
Among progressing member states, Germany designated the Bundesnetzagentur as its primary market surveillance authority in draft legislation, with BaFin covering financial-sector AI systems, though early elections disrupted the legislative calendar. France designated the DGCCRF for consumer protection, the CNIL for data protection, and the Defender of Rights for fundamental rights.
As of late 2025, fourteen member states had not yet designated any competent authority. Hungary and Italy had not identified their fundamental rights protection authorities despite the November 2024 deadline. Twenty-one member states had not submitted national implementation legislation by the August 2025 deadline. The prudent approach is to monitor the IAPP EU AI Act Regulatory Directory and the Future of Life Institute's national implementation tracker, to establish contact with designated authorities early, and to prepare documentation adaptable to varying national procedures.
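The prudent approach above amounts to a per-jurisdiction engagement plan: contact the authority early where one exists, and keep documentation adaptable where none does. The sketch below illustrates this; the designation snapshot is illustrative only, and real status should come from the tracking directories cited above.

```python
# Hypothetical designation snapshot (None = no authority designated yet);
# in practice this would be refreshed from the IAPP directory or FLI tracker.
DESIGNATED = {
    "ES": "AESIA",
    "IE": "sector-specific authorities",
    "NL": None,
    "HU": None,
}

def engagement_plan(jurisdictions, designated):
    """Recommend early contact where an authority exists, and adaptable
    documentation plus tracker monitoring where none does."""
    plan = {}
    for j in jurisdictions:
        authority = designated.get(j)
        if authority:
            plan[j] = f"establish contact with {authority}"
        else:
            plan[j] = "no authority designated: monitor trackers, keep documentation adaptable"
    return plan

print(engagement_plan(["ES", "HU"], DESIGNATED))
```

The same structure extends naturally to per-jurisdiction deadlines or document-format notes as national procedures are published.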
Article 57 requires each member state to establish at least one AI regulatory sandbox by August 2026. Sandboxes provide a controlled environment in which providers can develop, train, validate, and test AI systems under regulatory supervision before placing them on the market. Participation is voluntary but offers significant advantages.
Sandbox participation provides direct regulatory feedback on the system's compliance approach before the conformity assessment, reducing the risk of a failed assessment or post-market enforcement action. It creates a documented track record of regulatory cooperation that strengthens credibility with market surveillance authorities. Article 57(8) permits the competent authority to agree on conditions that facilitate innovation. For systems in novel domains where the application of the Act's requirements is unclear, sandbox participation can help establish precedent.
Participation requires dedicated effort. The organisation must prepare an application demonstrating the system's intended purpose, risk profile, and development stage. Regular reporting to the supervising authority is expected, including progress updates, test results, and issues encountered. Programmes typically run for six to twelve months. Organisations should consider sandbox participation for their highest-risk or most novel systems where regulatory uncertainty is greatest. Lower-risk systems with well-understood compliance pathways are better served by standard internal assessment.
The AI System Assessor integrates sandbox findings and supervisory feedback into the AISDP. Where the competent authority has reviewed and accepted specific aspects of the system's design, the Legal and Regulatory Advisor documents that acceptance as supporting evidence. Sandbox exit reports, in which the supervising authority provides a formal summary of outcomes, are particularly valuable evidence artefacts for the AISDP.
In summary: the AI Office's guidance is non-binding but carries significant interpretive weight, so monitor its publications systematically, follow published guidance unless a departure is documented, and participate in public consultations on draft guidelines. Where no competent authority has yet been designated, monitor the tracking directories, keep documentation in adaptable formats, and follow the most demanding interpretation of requirements until national guidance is published. Sandbox participation typically runs six to twelve months and requires dedicated effort, including an application, regular reporting, and progress updates to the supervising authority.