Article 15 of the EU AI Act requires high-risk AI systems to resist unauthorised exploitation, but the cybersecurity obligation intersects with NIS2, the Cyber Resilience Act, and DORA. This page maps the overlapping requirements and shows how organisations can satisfy multiple regimes through integrated controls.
Article 15(4) and (5) of the EU AI Act require high-risk AI systems to be resilient against attempts by unauthorised third parties to alter their use, outputs, or performance by exploiting system vulnerabilities. This obligation covers adversarial attacks, data poisoning, model extraction, and other AI-specific threats alongside traditional cybersecurity requirements applicable to any software system. The AISDP Module 9 must document the cybersecurity measures in place, and a competent authority or notified body reviewing it will expect a threat model specific to the AI system, evidence of security testing, a description of the security architecture, an incident response plan, and evidence of ongoing monitoring.
The cybersecurity obligation does not exist in isolation. It intersects with the NIS2 Directive (Directive (EU) 2022/2555) for operators of essential and important entities, the Cyber Resilience Act (Regulation (EU) 2024/2847) for products with digital elements, and DORA (Regulation (EU) 2022/2554) for financial sector entities. Organisations subject to multiple cybersecurity regimes must map the overlapping requirements and ensure their AISDP cybersecurity documentation addresses the AI Act's specific expectations without duplicating effort across regulatory frameworks.
The following table summarises how each regime relates to the AI Act:
| Regime | Scope Overlap with AI Act | Incident Reporting Deadline | Deemed Compliance Pathway |
|---|---|---|---|
| AI Act (Art. 15) | AI system resilience, AI-specific threats | 2 / 10 / 15 days (Art. 73) | N/A (primary regime) |
| Cyber Resilience Act | Software products with digital elements | 24 hours early warning, 72 hours notification | Art. 15(5) deemed compliance for overlapping requirements |
| NIS2 Directive | Network and information systems of essential/important entities | 24 hours early warning, 72 hours full notification | No deemed compliance; parallel obligations |
| DORA | Financial entity ICT risk management | 4 hours initial, 72 hours intermediate | No deemed compliance; parallel obligations |
| GDPR | Personal data processing | 72 hours (Art. 33) | Separate regime; cumulative obligations |
NIS2 applies to essential and important entities across sectors including energy, transport, health, digital infrastructure, public administration, and ICT service management. Organisations deploying high-risk AI systems in these sectors are likely subject to both NIS2 and Article 15 requirements. NIS2 mandates risk management measures covering incident handling, business continuity, supply chain security, network and information system security, and cybersecurity hygiene practices. Many of these overlap with Article 15, but the scope differs: NIS2 focuses on network and information systems broadly, while Article 15 focuses on AI system resilience specifically, including AI-specific threats that NIS2 does not address.
In practice, the AI system's cybersecurity controls should be built on top of the organisation's NIS2 compliance framework, adding AI-specific threat modelling, AI-specific testing, and AI-specific incident response procedures as extensions. Module 9 should reference the organisation's NIS2 risk management measures where they apply and document the AI-specific extensions that go beyond NIS2's scope.
NIS2 Article 21(2)(d) requires risk management measures addressing supply chain security. In most cases, the AI Act supply chain controls will satisfy NIS2's requirements for the AI system's portion of the supply chain, because AI-specific controls such as model provenance verification, embedding model sentinel testing, and SBOM generation are stricter than what NIS2 requires. Any gaps where NIS2 requires measures that the AI-specific controls do not address, such as supplier incident notification clauses or contractual audit rights mandated by a member state's transposition, should be documented by the AI System Assessor.
NIS2 requires entities to report significant incidents within a tiered timeline: an early warning within 24 hours, an incident notification within 72 hours providing an updated severity and impact assessment, and a final report within one month. Article 73 of the AI Act requires serious incident reporting to the market surveillance authority with separate timelines of two days for fundamental rights and critical infrastructure incidents, ten days for deaths, and fifteen days for other serious incidents.
A cybersecurity incident affecting a high-risk AI system can trigger both reporting obligations simultaneously. Consider a ransomware attack that compromises the inference service of a credit scoring system: it may constitute a significant incident under NIS2 and a serious incident under Article 73 if it causes systematic denial of essential services affecting fundamental rights or disruption to critical infrastructure management.
The incident response plan should include a dual-reporting decision tree. Upon triage, the team assesses whether the incident meets NIS2's significant incident threshold and whether it meets the serious incident definition under Article 3(49). NIS2's 24-hour early warning is typically the first deadline; Article 73's shortest deadline is two days. Content across both reports should be consistent, because contradictory statements to different authorities create legal exposure.
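A minimal sketch of such a decision tree, assuming illustrative classification flags (`nis2_significant`, `ai_act_serious`, and so on) that a real triage process would derive from NIS2's significant-incident criteria and the Article 3(49) definition; the deadline arithmetic follows the timelines described in this section:

```python
from datetime import datetime, timedelta

# Illustrative predicates only: the real criteria come from NIS2 as
# transposed in the relevant member state and from AI Act Art. 3(49)/Art. 73.
def triage(incident: dict, detected_at: datetime) -> list[tuple[str, datetime]]:
    """Return the reporting clocks an incident starts, earliest deadline first."""
    clocks = []
    if incident.get("nis2_significant"):
        clocks.append(("NIS2 early warning", detected_at + timedelta(hours=24)))
        clocks.append(("NIS2 incident notification", detected_at + timedelta(hours=72)))
    if incident.get("ai_act_serious"):
        days = 2 if incident.get("fundamental_rights_or_critical_infra") else \
               10 if incident.get("death") else 15
        clocks.append((f"AI Act Art. 73 report ({days}d)",
                       detected_at + timedelta(days=days)))
    return sorted(clocks, key=lambda c: c[1])

# Example: the ransomware scenario from above, on a credit-scoring system.
now = datetime(2026, 1, 15, 9, 30)
for name, deadline in triage(
        {"nis2_significant": True, "ai_act_serious": True,
         "fundamental_rights_or_critical_infra": True}, now):
    print(f"{deadline:%Y-%m-%d %H:%M}  {name}")
```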
Several fields are common to both reporting regimes and should be prepared once in a shared template: entity identity, the system affected, the timeline of events, the nature and scope of the incident, immediate containment actions, and estimated impact. Regime-specific fields are then added. NIS2 requires the number of users affected, cross-border impact, and the CSIRT-specific format; Article 73 requires the suspected causal link between the AI system and the harm, the fundamental rights dimension, and the system's EU database registration reference.
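A sketch of the shared template as a data structure; the field names are illustrative, not mandated by either regime:

```python
from dataclasses import dataclass, field

@dataclass
class SharedFactSheet:
    """Fields common to NIS2 and AI Act Art. 73 reports, prepared once."""
    entity: str
    system: str
    timeline: list[str]                  # chronological event log
    nature_and_scope: str
    containment_actions: list[str]
    estimated_impact: str
    annexes: dict[str, dict] = field(default_factory=dict)  # regime-specific

# Hypothetical incident data for illustration.
sheet = SharedFactSheet(
    entity="Example Credit Bank AG",
    system="credit-scoring-v4 (EU database registration ref: pending)",
    timeline=["09:30 detection", "09:45 inference service isolated"],
    nature_and_scope="Ransomware on inference infrastructure",
    containment_actions=["network isolation", "failover to manual review"],
    estimated_impact="Scoring unavailable; no evidence of data exfiltration",
)
# Regime-specific annexes are attached as each deadline approaches.
sheet.annexes["NIS2"] = {"users_affected": 120_000, "cross_border": False}
sheet.annexes["AI_Act_Art73"] = {
    "suspected_causal_link": "denial of essential service, not model fault",
    "fundamental_rights_dimension": "access to credit decisions delayed",
}
```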
The CRA applies to products with digital elements placed on the EU market, requiring manufacturers to conduct cybersecurity risk assessment, implement security by design, manage vulnerabilities throughout the product lifecycle, and report actively exploited vulnerabilities to ENISA within 24 hours.
The CRA applies to products with digital elements placed on the EU market, requiring manufacturers to conduct cybersecurity risk assessment, implement security by design, manage vulnerabilities throughout the product lifecycle, and report actively exploited vulnerabilities to ENISA within 24 hours. Whether the CRA applies to a given AI system is the first question to address, because the answer determines whether Article 15(5)'s deemed compliance pathway is available.
A high-risk AI system delivered as standalone software installed on the deployer's infrastructure qualifies as a product with digital elements under the CRA. An AI system embedded in a physical product such as a medical device, industrial control system, or autonomous vehicle component qualifies through the product it is part of. A purely cloud-hosted SaaS AI system consumed entirely via API may fall outside scope; Commission interpretation of the SaaS boundary is still evolving as of early 2026, and treating the system as within scope is the safer position.
Under the CRA, products are classified as default, important (Class I and Class II), or critical. Classification determines the conformity assessment route: default products use self-assessment, important products require EU-type examination or production quality assurance modules, and critical products require European cybersecurity certification. High-risk AI systems deployed in critical infrastructure, healthcare, or industrial control may qualify as important or critical products. This creates a conformity assessment interaction where the AI system may require internal assessment under the AI Act and third-party CRA cybersecurity conformity assessment under the CRA. Early identification of this overlap allows both processes to be planned in parallel.
Article 15(5) provides that compliance with the CRA's essential cybersecurity requirements is deemed to satisfy Article 15(4)'s resilience requirement, provided that the CRA conformity assessment covers the AI-specific aspects.
Article 15(5) provides that compliance with the CRA's essential cybersecurity requirements is deemed to satisfy Article 15(4)'s resilience requirement, provided that the CRA conformity assessment covers the AI-specific aspects. This is the single most consequential cross-regulatory provision in the cybersecurity domain for product-based AI systems.
An organisation with CRA conformity evidence does not need to duplicate the traditional cybersecurity work in Module 9. Network security, encryption, access control, and vulnerability management can be cross-referenced from the CRA conformity assessment. Module 9 then focuses its original content on the AI-specific extensions: adversarial ML testing, data poisoning controls, model-specific threat modelling, prompt injection defences, and other AI-native threats.
When the deemed compliance pathway applies, Module 9 should be structured in two layers. A first layer references the CRA conformity assessment and identifies which Article 15 sub-requirements are satisfied by the CRA evidence. A second layer documents the AI-specific cybersecurity measures that extend beyond the CRA's scope. Both the AI Act competent authority and the CRA notified body can then see clearly which requirements are addressed by which evidence.
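The two-layer structure can be kept auditable with a simple coverage map; the requirement labels and evidence references below are hypothetical:

```python
# Layer 1: Article 15 sub-requirements satisfied by CRA conformity evidence.
# Layer 2: AI-specific measures documented directly in Module 9.
COVERAGE = {
    "access control":               ("CRA", "CRA conformity assessment, s. 4.2"),
    "encryption in transit":        ("CRA", "CRA conformity assessment, s. 4.3"),
    "vulnerability management":     ("CRA", "CRA conformity assessment, s. 5"),
    "adversarial input resilience": ("Module 9", "adversarial ML test report"),
    "data poisoning controls":      ("Module 9", "training pipeline integrity checks"),
    "model extraction defences":    ("Module 9", "rate limiting + output controls"),
}

def layer(source: str) -> list[str]:
    """List the sub-requirements evidenced by one layer."""
    return [req for req, (src, _) in COVERAGE.items() if src == source]

print("Satisfied by CRA evidence:", layer("CRA"))
print("AI-specific extensions:  ", layer("Module 9"))
```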
The "provided that" clause deserves close attention. A CRA assessment that evaluates only the product's network security, update mechanisms, and vulnerability handling, without evaluating the AI system's resilience to adversarial inputs, data poisoning, or model extraction, does not satisfy the condition. Verification that the CRA assessment scope explicitly includes the AI-specific threat categories from the threat model is essential. If it does not, deemed compliance cannot be relied upon for those categories, and Module 9 must address them independently.
DORA applies to financial entities including credit institutions, insurance undertakings, investment firms, payment institutions, and crypto-asset service providers, as well as their critical ICT third-party service providers. Financial sector organisations deploying high-risk AI systems for credit scoring, fraud detection, algorithmic trading, insurance underwriting, or customer due diligence are subject to both DORA and the AI Act.
DORA's requirements cover ICT risk management (Article 6), ICT-related incident classification and reporting (Articles 17-19), digital operational resilience testing (Articles 24-27), ICT third-party risk management (Articles 28-30), and information sharing (Article 45). Overlap with the AI Act's cybersecurity requirements is substantial.
| DORA Requirement | AI Act Counterpart | Single Control? |
|---|---|---|
| ICT risk management framework (Art. 6) | Art. 15 resilience; Art. 9 risk management | Partially; DORA scope extends to all ICT, AI risk management satisfies DORA for AI components only |
| ICT incident classification (Art. 17) | Art. 3(49) serious incident definition | Partially; a unified severity taxonomy can serve both, but classification criteria differ |
| ICT incident reporting (Art. 19) | Art. 73 | No; different authorities, timelines, and content requirements |
| Digital operational resilience testing (Art. 24) | Art. 15(4) resilience; Annex IV testing documentation | Partially; DORA's programme can incorporate AI-specific testing but scope must be documented |
| Threat-led penetration testing (Art. 26) | Art. 15 resilience testing | Yes, if TLPT scope explicitly includes AI-specific attack scenarios |
| ICT third-party risk management (Art. 28-30) | Art. 15 supply chain | Partially; DORA's requirements for third-party registers and contractual provisions are more prescriptive |
| ICT third-party register (Art. 28(3)) | Annex IV component documentation | Yes; a single register can satisfy both requirements |

A system-specific version of this mapping should be produced during risk assessment, identifying which regimes apply and tailoring the integration approach.
For AI systems consuming third-party model APIs, the model provider is an ICT third-party service provider under DORA.
For AI systems consuming third-party model APIs, the model provider is an ICT third-party service provider under DORA. The model selection process must therefore satisfy DORA's pre-contractual assessment requirements in addition to the AI Act's model origin risk analysis. The model selection record should document the DORA risk assessment for the provider covering financial stability, business continuity, security certifications, data handling, and concentration risk. It should also document the contractual provisions negotiated, including audit rights, security SLAs, data location, exit provisions, and sub-processor restrictions.
Every AI-related service provider should appear in the DORA third-party register: foundation model providers, embedding model providers, annotation services, cloud infrastructure hosting AI workloads, and managed ML services. Each entry requires the contractual and risk assessment information DORA mandates. Where a financial entity designates an AI model provider as a critical ICT third-party service provider, DORA's enhanced oversight requirements apply, including more intensive ongoing monitoring, enhanced contractual protections, and contingency planning for provider failure. A multi-provider strategy or a fallback to an internally hosted model may be necessary.
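A sketch of one register entry, with illustrative field names; the mandatory content of the register is fixed by DORA's regulatory technical standards, not by this sketch:

```python
from dataclasses import dataclass

@dataclass
class IctThirdPartyEntry:
    """One row of the DORA Art. 28(3) third-party register (illustrative fields)."""
    provider: str
    service: str                      # e.g. "foundation model API"
    supports_critical_function: bool
    contract_ref: str
    audit_rights: bool
    exit_strategy: str                # contingency if the provider fails
    concentration_notes: str

register = [
    IctThirdPartyEntry(
        provider="ExampleModelCo",    # hypothetical provider
        service="foundation model API",
        supports_critical_function=True,
        contract_ref="MSA-2025-014",
        audit_rights=True,
        exit_strategy="fallback to internally hosted open-weights model",
        concentration_notes="sole provider for credit-scoring embeddings",
    ),
]
```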
DORA Article 24 requires financial entities to establish digital operational resilience testing programmes covering vulnerability assessments, open-source software analysis, network security assessments, scenario-based testing, and penetration testing. Significant financial entities are additionally subject to threat-led penetration testing under Article 26, conducted by qualified external testers using intelligence-led methodologies such as TIBER-EU.
A cybersecurity event affecting a high-risk AI system can trigger reporting obligations under two, three, or all four of the regimes covered above. The incident response plan must handle this multiplicity without the response team losing time to uncertainty about which authorities to notify, in which order, with which content.
The reporting timelines are as follows:

| Regime | Initial Deadline | Subsequent Deadlines |
|---|---|---|
| DORA | 4 hours from classification (initial notification) | Intermediate report at 72 hours |
| NIS2 | 24 hours (early warning) | Incident notification at 72 hours; final report within one month |
| CRA | 24 hours to ENISA (early warning) | Notification at 72 hours; final report within 14 days |
| AI Act (Art. 73) | 2 days (fundamental rights or critical infrastructure) | 10 days for deaths; 15 days for other serious incidents |
Two additional regulatory instruments are relevant to the cybersecurity dimension of the AISDP for specific system categories. Neither is fully in force as of early 2026, but organisations should monitor their development.
The EU Data Act (Regulation (EU) 2023/2854), applicable from 12 September 2025, governs access to and use of data generated by connected products and related services. AI systems embedded in IoT devices or connected products may be affected by its data-sharing obligations. A cybersecurity dimension arises because the Data Act requires data holders to make data available to users and third parties under reasonable conditions, with security measures maintained during sharing. For AI systems processing IoT-generated data from industrial equipment, health monitoring devices, or connected vehicles, the Data Act may require sharing of training data, inference inputs, or operational data. The security architecture must accommodate these obligations without compromising the AI system's integrity. Secure data export mechanisms, access-controlled sharing interfaces, and audit logging of shared data are the primary controls.
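A sketch of the access-controlled, audit-logged export path, assuming a simple in-memory ACL and audit log; a production system would persist both and sit behind a secure sharing interface:

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative ACL: which requester may receive which dataset.
ACL = {"user-42": {"telemetry"}, "third-party-7": {"telemetry"}}

def export_dataset(requester: str, dataset: str, rows: list[dict],
                   audit_log: list[dict]) -> str | None:
    """Share connected-product data only with authorised parties, recording
    what was shared, to whom, and when."""
    granted = dataset in ACL.get(requester, set())
    entry = {"requester": requester, "dataset": dataset, "granted": granted,
             "at": datetime.now(timezone.utc).isoformat()}
    if not granted:
        audit_log.append(entry)
        return None
    payload = json.dumps(rows, sort_keys=True)
    entry["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
    audit_log.append(entry)
    return payload
```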
The European Health Data Space Regulation establishes rules for the secondary use of electronic health data, including for AI training and development. High-risk AI systems in the healthcare domain that use health data will need to comply with EHDS security requirements for data access environments, including data minimisation, access controls, audit trails, and prohibition on re-identification.
Healthcare AI systems face a particularly dense regulatory overlay: the AI Act, NIS2, the CRA (if embedded in a medical device), the Medical Devices Regulation (Regulation (EU) 2017/745), GDPR, and the EHDS. Module 9 for such a system will be among the most complex in the organisation's portfolio, and a comprehensive regulatory mapping addressing all applicable regimes with integration points clearly identified is essential.
CRA compliance does not automatically satisfy Article 15. The deemed compliance pathway under Article 15(5) applies only if the CRA conformity assessment explicitly covers AI-specific threat categories such as adversarial ML testing, data poisoning, and model extraction. Traditional cybersecurity aspects are covered, but the AI-specific extensions must be verified.
When an incident triggers both NIS2 and AI Act reporting, both streams run in parallel. NIS2's 24-hour early warning is typically the first deadline. A shared incident fact sheet is prepared once, with regime-specific annexes added for each authority. Article 73(9) may limit the AI Act obligation to fundamental rights infringements for NIS2-regulated entities.
Third-party model API providers are ICT third-party service providers under DORA and must appear in the DORA third-party register with full contractual and risk assessment information. This covers foundation model providers, embedding model providers, and managed ML services alike.
DORA requires an initial notification within four hours of classifying a major ICT-related incident. This is significantly shorter than NIS2's 24-hour early warning, the CRA's 24-hour early warning, or Article 73's two-day minimum.
Compliance with the CRA's essential cybersecurity requirements is deemed to satisfy Article 15(4)'s resilience requirement, provided the CRA conformity assessment explicitly covers AI-specific aspects such as adversarial ML testing and data poisoning controls.
Financial entities deploying high-risk AI systems for credit scoring, fraud detection, or algorithmic trading face parallel DORA and AI Act obligations covering ICT risk management, incident reporting, resilience testing, and third-party risk management.
A multi-regime decision tree at triage determines which reporting clocks are triggered. A shared incident fact sheet with regime-specific annexes is prepared, with DORA's four-hour deadline driving operational cadence.
The EU Data Act affects AI systems embedded in connected products through data-sharing obligations, and the European Health Data Space imposes additional security requirements for healthcare AI systems using health data.
For inspection readiness, the organisation should anticipate that a NIS2 audit may examine the AI system's cybersecurity controls from the NIS2 perspective, while an AI Act inspection examines the same controls from the Article 15 perspective. Module 9's evidence pack should be structured to serve both audiences without requiring reorganisation under time pressure. A regulator contact register should be maintained listing, for each member state where the system is deployed, the AI Act market surveillance authority, the NIS2 competent authority or CSIRT, and the contact details and reporting portals for each.
Article 73(9) provides significant practical relief for NIS2-regulated entities. Under this provision, entities already subject to NIS2 are limited to reporting fundamental rights infringements under Article 3(49)(c) through the AI Act; other serious incident categories are reported through NIS2. The organisation must confirm that the NIS2 transposition in the relevant member state covers the incident categories that Article 73 would otherwise require.
Module 9 should record the CRA scope determination, including the system's delivery model and the reasoning for or against CRA applicability, along with the CRA product classification and the resulting conformity assessment route.
CRA vulnerability management obligations require manufacturers to identify, document, and remediate vulnerabilities, provide security updates for a defined support period, and report actively exploited vulnerabilities to ENISA on a 24-hour/72-hour/14-day timeline. An actively exploited vulnerability in a high-risk AI system may constitute both a CRA reportable event to ENISA and an AI Act serious incident to the market surveillance authority. Pre-drafted templates for both the ENISA early warning and the Article 73 initial report reduce preparation time when both deadlines are running concurrently.
The CRA also requires manufacturers to produce a software bill of materials documenting all components and vulnerabilities. For AI systems, SBOM practices should extend to ML-specific components such as pre-trained models, embedding models, and tokenisers that the CRA's software-oriented SBOM requirements do not explicitly address. Implementing guidance from the Commission may impose specific SBOM format requirements; SBOM generation tooling should be adjusted accordingly. The CRA requires ongoing SBOM updates throughout the product lifecycle and may require SBOM delivery to deployers.
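A sketch of an ML-extended SBOM in CycloneDX-style JSON; CycloneDX 1.5 defines a `machine-learning-model` component type, but the property names below are assumptions pending implementing guidance:

```python
import json

# Minimal CycloneDX-style document extended with ML-specific components.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "numpy", "version": "1.26.4"},
        {"type": "machine-learning-model", "name": "credit-scorer",
         "version": "4.2.0",
         "properties": [
             # Property names are illustrative, not a fixed schema.
             {"name": "provenance", "value": "trained in-house, run 2025-11-03"},
             {"name": "base-model", "value": "example-base-7b"},  # hypothetical
         ]},
        {"type": "machine-learning-model", "name": "doc-embedder",
         "version": "1.1.0"},
    ],
}
print(json.dumps(sbom, indent=2))
```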
Annual penetration tests should include both traditional application security targets and AI-specific targets such as model API endpoints, data pipeline endpoints, and human oversight interfaces. Testing reports should map each finding to both the DORA risk framework and the AI Act threat model. Red team exercises should include financial-sector-specific scenarios: adversarial manipulation of credit scoring outputs, model extraction for competitive advantage, data poisoning to influence lending decisions.
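A sketch of a finding record carrying both mappings; the taxonomy identifiers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A pentest finding tagged against both frameworks."""
    title: str
    dora_risk_category: str   # entity's DORA ICT risk taxonomy (illustrative)
    ai_act_threat: str        # Module 9 threat model entry (illustrative)
    severity: str

findings = [
    Finding("Model API accepts crafted inputs that flip score outcome",
            dora_risk_category="ICT-RISK-APP-SEC",
            ai_act_threat="adversarial input manipulation",
            severity="high"),
    Finding("Training data bucket writable by CI role",
            dora_risk_category="ICT-RISK-ACCESS",
            ai_act_threat="data poisoning",
            severity="critical"),
]
for f in findings:
    print(f"{f.severity:8} {f.dora_risk_category:18} -> {f.ai_act_threat}: {f.title}")
```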
For TLPT, the threat intelligence phase should explicitly address AI-specific threat actors and techniques, using MITRE ATLAS alongside MITRE ATT&CK. TLPT scope should encompass adversarial inputs designed to manipulate financial decisions, model extraction attempts, data poisoning scenarios targeting the training pipeline, and prompt injection attacks for LLM-based systems. TLPT reports, shared with the financial supervisor, also serve as Module 9 evidence for the AI Act, provided they are structured to address both regimes' expectations.
DORA's four-hour deadline is the most aggressive. For a financial entity subject to both DORA and NIS2, that four-hour deadline drives the operational cadence. Classifying an incident and producing an initial notification within four hours requires that the triage process, classification criteria, and notification templates are rehearsed.
At the triage stage, a multi-regime decision tree should be activated. Four questions are answered in sequence: Does the incident meet DORA's major ICT-related incident criteria? Does it meet NIS2's significant incident criteria? Does it involve an actively exploited vulnerability in a CRA-scoped product? Does it meet the serious incident definition under Article 3(49)? Each positive answer starts the corresponding reporting clock.
A shared incident fact sheet should be prepared immediately containing fields common to all regimes: entity identity, system identity, incident timeline, nature and scope, containment actions, and initial impact assessment. Regime-specific annexes are then attached as each reporting deadline approaches. Tagging incidents with applicable regimes and tracking each regime's reporting deadline independently should be supported by the incident management platform. A single incident may carry four parallel deadline trackers.
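A sketch of the parallel deadline trackers, using the offsets summarised in the timelines table above; the regime tags and labels are illustrative, and the AI Act tracker shows only the worst-case two-day deadline:

```python
from datetime import datetime, timedelta

# One tracker list per regime; an incident tagged with a regime gets
# that regime's clocks. Offsets follow the timelines in this section.
TRACKERS = {
    "DORA":   [("initial notification", timedelta(hours=4)),
               ("intermediate report", timedelta(hours=72))],
    "NIS2":   [("early warning", timedelta(hours=24)),
               ("incident notification", timedelta(hours=72))],
    "CRA":    [("ENISA early warning", timedelta(hours=24)),
               ("ENISA notification", timedelta(hours=72))],
    "AI_Act": [("Art. 73 report (worst case 2d)", timedelta(days=2))],
}

def start_clocks(regimes: list[str], classified_at: datetime) -> None:
    """Print every running deadline across the tagged regimes, earliest first."""
    due = [(classified_at + offset, regime, label)
           for regime in regimes
           for label, offset in TRACKERS[regime]]
    for deadline, regime, label in sorted(due):
        print(f"{deadline:%d %b %H:%M}  {regime:6} {label}")

start_clocks(["DORA", "NIS2", "AI_Act"], datetime(2026, 1, 15, 10, 0))
```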
Article 73(9) provides a simplification for sectors with existing equivalent reporting obligations. Entities subject to NIS2, DORA, or sector-specific legislation with equivalent requirements are limited to reporting fundamental rights infringements under Article 3(49)(c) through the AI Act. Other serious incident categories are reported through the sector-specific regime. This reduces the AI Act reporting burden without eliminating it: fundamental rights infringements such as systematic discrimination in credit decisions remain reportable under Article 73 even when the same incident is already reported elsewhere.
Organisations subject to multiple regimes simultaneously should produce a consolidated cross-regulatory cybersecurity mapping during risk assessment. The mapping should cover risk management, incident reporting, vulnerability management, supply chain security, penetration testing, security monitoring, business continuity, and conformity assessment across all applicable regimes. The integration approach for each domain should build on whichever framework is broadest, then extend with AI-specific threat categories.
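A sketch of the consolidated mapping as a domains-by-regimes matrix; "B" marks the assumed base framework for a domain and "E" an AI-specific extension layered on it, with the assignments below purely illustrative:

```python
# Control domains x regimes, rendered as a simple text matrix.
DOMAINS = ["risk management", "incident reporting", "vulnerability mgmt",
           "supply chain", "penetration testing", "monitoring",
           "business continuity", "conformity assessment"]
REGIMES = ["AI Act", "NIS2", "CRA", "DORA"]
MATRIX = {
    ("risk management", "DORA"): "B", ("risk management", "AI Act"): "E",
    ("incident reporting", "NIS2"): "B", ("incident reporting", "AI Act"): "E",
    ("supply chain", "DORA"): "B", ("supply chain", "AI Act"): "E",
    ("penetration testing", "DORA"): "B", ("penetration testing", "AI Act"): "E",
}

print(f"{'domain':22}" + "".join(f"{r:>8}" for r in REGIMES))
for d in DOMAINS:
    row = "".join(f"{MATRIX.get((d, r), '-'):>8}" for r in REGIMES)
    print(f"{d:22}{row}")
```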