Article 15 of the EU AI Act requires high-risk AI systems to achieve an appropriate level of cybersecurity. DevSecOps integration embeds security controls into every stage of the CI/CD pipeline, from threat modelling and static analysis through to runtime protection and forensic log retention.
DevSecOps integration embeds security throughout the AI development lifecycle rather than treating it as a final-stage gate. The engineering team integrates security practices into every stage of the CI/CD pipeline, from initial threat modelling through to runtime protection. This approach ensures that cybersecurity measures are not bolted on at the end but woven into the fabric of development, deployment, and operations. For the broader cybersecurity framework that DevSecOps supports, see Cybersecurity Measures for High-Risk AI.
Threat modelling should be conducted before development begins, using frameworks such as STRIDE, PASTA, or an equivalent methodology. The threat model identifies the system's attack surfaces, including data ingestion APIs, user interfaces, model serving endpoints, and administrative interfaces.
It also identifies relevant threat actors: external attackers, malicious insiders, compromised deployers, and adversarial users. For each attack surface, the team documents the applicable threat scenarios and the countermeasures that address them.
The Technical SME maintains the threat model as a living document, reviewed whenever the system's architecture or deployment context changes. The threat model feeds directly into Module 9 of the AI system description package, linking security analysis to the compliance documentation chain. Threat modelling also intersects with the broader Technical Documentation and Record-Keeping requirements.
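As an illustration, the living threat model described above can be kept in a machine-readable registry so that coverage gaps surface automatically at each review. The entry fields, example surfaces, and the `uncovered` helper below are hypothetical, a minimal sketch rather than a prescribed schema:

```python
from dataclasses import dataclass

# Hypothetical minimal threat-model registry; categories follow STRIDE.
STRIDE = {"Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"}

@dataclass(frozen=True)
class ThreatEntry:
    surface: str          # e.g. "model serving endpoint"
    category: str         # STRIDE category
    scenario: str         # documented threat scenario
    countermeasure: str   # control that addresses it ("" = gap)

def uncovered(entries):
    """Return (surface, category) pairs documented without a countermeasure."""
    return [(e.surface, e.category) for e in entries if not e.countermeasure]

model = [
    ThreatEntry("data ingestion API", "Tampering",
                "poisoned records injected into training data",
                "schema validation and provenance checks"),
    ThreatEntry("model serving endpoint", "Information disclosure",
                "model extraction via high-volume queries",
                ""),  # gap: no countermeasure documented yet
]

print(uncovered(model))  # → [('model serving endpoint', 'Information disclosure')]
```

A registry like this makes the architecture-change review concrete: the review passes only when `uncovered` returns an empty list.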
The CI pipeline should incorporate multiple layers of automated security scanning to detect vulnerabilities before code reaches production. Static application security testing (SAST) tools such as Bandit, SonarQube, or Semgrep scan source code for common vulnerability patterns including injection, authentication flaws, and insecure defaults.
Dynamic application security testing (DAST) tests running application instances for vulnerabilities that static analysis cannot detect. Software composition analysis (SCA) checks third-party dependencies against vulnerability databases, ensuring that known-vulnerable libraries are flagged before integration.
Container image scanning with tools such as Trivy, Grype, or Snyk Container checks base images and installed packages for known vulnerabilities. Infrastructure-as-code security scanning with Checkov, tfsec, or KICS validates that infrastructure definitions follow security best practices. Together, these scanning layers create a defence-in-depth approach within the CI/CD Pipeline Design itself.
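To show how a scanning layer becomes a pipeline gate, the sketch below parses a Trivy-style JSON report and blocks on critical or high findings. The report layout assumed here (`Results` → `Vulnerabilities` → `Severity`) matches Trivy's JSON output as I understand it, but treat the field names as an assumption to verify against your scanner's actual schema:

```python
import json

BLOCKING = {"CRITICAL", "HIGH"}

def gate(report_json: str, blocking=BLOCKING) -> list:
    """Return blocking findings from a Trivy-style JSON report.

    Assumes the layout Results[] -> Vulnerabilities[] -> Severity.
    An empty return value means the pipeline stage may proceed.
    """
    report = json.loads(report_json)
    findings = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in blocking:
                findings.append((vuln.get("VulnerabilityID"), vuln.get("Severity")))
    return findings

sample = json.dumps({"Results": [{"Vulnerabilities": [
    {"VulnerabilityID": "CVE-2024-0001", "Severity": "HIGH"},
    {"VulnerabilityID": "CVE-2024-0002", "Severity": "LOW"},
]}]})
print(gate(sample))  # → [('CVE-2024-0001', 'HIGH')]
```

The same pattern applies to SAST, SCA, and IaC scanner output: normalise each report to a list of blocking findings, and fail the stage when the list is non-empty.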
Pre-deployment security verification should confirm four conditions before any candidate release proceeds to production. First, no critical or high-severity vulnerabilities exist in the candidate release. Second, container images are signed and verified.
Third, the infrastructure configuration passes security policy checks. Fourth, access controls are correctly configured for the deployment environment. These gates ensure that security deficiencies identified during CI scanning are not bypassed during deployment.
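The four gates can be expressed as a single promotion check, which keeps the "no bypass" property auditable. The `release` record and its field names below are hypothetical placeholders for whatever your CI system actually emits:

```python
def predeploy_checks(release) -> dict:
    """Evaluate the four pre-deployment security gates for a candidate release.

    `release` is a hypothetical record assembled from CI scan results and the
    target environment; every gate must pass before promotion to production.
    """
    return {
        "no_critical_or_high_vulns": release["max_severity"] not in {"CRITICAL", "HIGH"},
        "images_signed_and_verified": release["image_signature_verified"],
        "iac_policy_checks_passed": release["policy_violations"] == 0,
        "access_controls_configured": release["iam_review_approved"],
    }

def may_deploy(release) -> bool:
    return all(predeploy_checks(release).values())

candidate = {"max_severity": "MEDIUM", "image_signature_verified": True,
             "policy_violations": 0, "iam_review_approved": True}
print(may_deploy(candidate))  # → True
```

Returning the per-gate dictionary rather than a bare boolean means a blocked release reports exactly which condition failed.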
Production environments should implement multiple overlapping runtime protections. Runtime application self-protection (RASP) provides in-process defence against exploitation attempts. Intrusion detection and prevention systems monitor network traffic for known attack patterns.
Anomaly-based monitoring detects unusual access patterns that may indicate compromise or misuse. Container runtime security tools such as Falco, Sysdig Secure, or equivalent solutions detect unexpected process execution, file system modifications, or network connections within containers. These controls complement the static and dynamic testing performed earlier in the pipeline, closing the loop on vulnerabilities that emerge only at runtime.
Security monitoring for AI systems requires augmenting traditional security information and event management (SIEM) with AI-specific detection capabilities. The SIEM should ingest logs from every layer of the AI system architecture: infrastructure logs, application logs, API gateway logs, model inference logs, data pipeline execution logs, and human oversight interface access logs.
Standard SIEM correlation rules for brute-force detection, privilege escalation, and anomalous login locations are supplemented by AI-specific rules. A sudden increase in inference API calls from a single consumer may indicate a model extraction attempt. A pattern of systematically varied inputs may indicate adversarial probing. Changes to model artefact files outside the CI/CD pipeline indicate unauthorised modification, and access to training data outside scheduled pipeline execution windows is suspicious.
User and entity behaviour analytics (UEBA) should establish baselines for normal access patterns across the AI system's components. Deviations from baseline, including unusual data access volumes, off-hours model deployments, and access to model artefacts by non-engineering personnel, should generate alerts for investigation.
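One of the AI-specific rules above, the model-extraction heuristic, can be sketched as a per-consumer rate baseline with a deviation alert. The window length and three-sigma threshold are illustrative assumptions, not recommended values, and a production SIEM rule would work on the ingested inference logs rather than in-process counters:

```python
from collections import deque
import statistics

class ExtractionDetector:
    """Flag consumers whose inference-call volume spikes far above baseline.

    Keeps a rolling window of per-period call counts for each API consumer;
    a period more than `threshold` standard deviations above that consumer's
    historical mean raises an alert. Parameters here are illustrative only.
    """
    def __init__(self, window_len=24, threshold=3.0):
        self.history = {}            # consumer -> deque of recent counts
        self.window_len = window_len
        self.threshold = threshold

    def observe_window(self, consumer: str, calls: int) -> bool:
        past = self.history.setdefault(consumer, deque(maxlen=self.window_len))
        alert = False
        if len(past) >= 2:
            mean = statistics.mean(past)
            stdev = statistics.pstdev(past) or 1.0
            alert = calls > mean + self.threshold * stdev
        past.append(calls)
        return alert

det = ExtractionDetector()
for calls in [100, 110, 95, 105]:              # normal hourly volumes
    det.observe_window("consumer-a", calls)
print(det.observe_window("consumer-a", 5000))  # → True: possible extraction attempt
```

The same baseline-and-deviation shape covers the UEBA cases in the paragraph above: swap call counts for data-access volumes or artefact reads per principal.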
The SIEM and the post-market monitoring system operate on overlapping data streams. The SIEM focuses on security events whilst the PMM system focuses on performance, fairness, and drift. Both should share a common alerting framework so that an event detected by either system reaches the appropriate response team. A sudden performance degradation detected by the PMM system may have a security cause such as data poisoning or model tampering, and a security alert may have compliance implications such as fairness violations resulting from a compromised feature pipeline. The full framework is described in the dedicated post-market monitoring guidance.
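The shared alerting framework can be as simple as a common alert schema with category-based fan-out, so that a cross-cutting event reaches every relevant team regardless of which system raised it. The category names and team identifiers below are hypothetical:

```python
# Hypothetical shared routing table for SIEM and PMM alerts; the category
# list on each alert decides which response teams are notified.
ROUTES = {
    "security": "security-incident-response",
    "performance": "ml-engineering-oncall",
    "fairness": "compliance-review",
}

def route(alert: dict) -> set:
    """Return every team an alert should reach; cross-cutting alerts fan out."""
    return {ROUTES[category] for category in alert["categories"]}

# A PMM drift alert with a suspected security cause reaches both teams.
print(sorted(route({"source": "pmm",
                    "categories": ["performance", "security"]})))
```

The point of the shared schema is that neither system needs to know the other's taxonomy: tagging an alert with a second category is enough to pull in the other response team.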
Most AI systems are deployed on cloud infrastructure, introducing a shared responsibility model where the cloud provider secures the infrastructure layer and the organisation secures the application, data, and configuration layers. Misconfigurations in cloud environments are among the most common sources of security breaches, and AI systems are particularly exposed because they often require broad data access permissions and high-compute resources.
A documented security configuration baseline should define the required settings for every cloud service used by the AI system: storage bucket access policies, database network restrictions, compute instance security groups, IAM role definitions, and logging configurations. The Technical Owner codifies the baseline as infrastructure-as-code and enforces it through automated scanning.
Cloud security posture management (CSPM) tools should continuously assess the deployed infrastructure against the configuration baseline and against industry benchmarks such as CIS Benchmarks and cloud provider security best practices. Drift from the baseline should generate alerts, and critical deviations such as a training data bucket becoming publicly accessible should trigger immediate remediation.
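The baseline-versus-observed comparison at the heart of a CSPM check can be sketched as follows. The resource name, setting keys, and the choice of `public_access` as the critical key are illustrative, not a real rule set:

```python
# Minimal drift check against a documented configuration baseline.
BASELINE = {
    "training-data-bucket": {"public_access": False, "encryption": "aes256",
                             "access_logging": True},
}

CRITICAL_KEYS = {"public_access"}  # deviations here demand immediate remediation

def assess(resource: str, observed: dict):
    """Compare observed settings to the baseline.

    Returns (drift, critical): drift maps each deviating key to its
    (expected, observed) pair; critical lists the keys needing immediate action.
    """
    expected = BASELINE[resource]
    drift = {k: (v, observed.get(k)) for k, v in expected.items()
             if observed.get(k) != v}
    critical = [k for k in drift if k in CRITICAL_KEYS]
    return drift, critical

drift, critical = assess("training-data-bucket",
                         {"public_access": True, "encryption": "aes256",
                          "access_logging": True})
print(critical)  # → ['public_access']: trigger immediate remediation
```

Non-critical drift entries feed the alert queue; critical ones map to the "publicly accessible training data bucket" scenario in the paragraph above.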
Cloud IAM configurations tend to accumulate excessive permissions over time as development teams request broad access during development and never narrow it for production. Regular IAM access reviews, at minimum quarterly, should verify that every service account, role, and user has the minimum permissions required for their current function. The security team deactivates unused credentials, and any cross-account access is documented and justified.
Credential revocation is a critical component of the system end-of-life process. All production credentials, API keys, service accounts, and access tokens must be revoked at decommission. For systems integrated with deployer infrastructure, deployer-side credentials must also be addressed.
Data residency requirements apply to AI systems processing personal data of EU residents. The Technical Owner stores training data, inference logs, and model artefacts in EU regions unless a lawful transfer mechanism exists. Cloud provider configurations should enforce region restrictions at the policy level, preventing accidental data replication to non-compliant regions. These sovereignty controls intersect with the broader cybersecurity posture described in Cybersecurity Measures for High-Risk AI.
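Alongside policy-level enforcement, a periodic inventory sweep can confirm that nothing has drifted outside the approved regions. The region allow-list and inventory fields below are illustrative examples, not a complete list of compliant regions:

```python
# Illustrative residency guard: every storage location for the listed data
# classes must resolve to an approved EU region.
EU_REGIONS = {"eu-west-1", "eu-central-1", "europe-west4"}  # example allow-list

def residency_violations(inventory):
    """Return (data_class, region) pairs stored outside approved EU regions."""
    return [(item["data_class"], item["region"])
            for item in inventory if item["region"] not in EU_REGIONS]

inventory = [
    {"data_class": "training-data", "region": "eu-west-1"},
    {"data_class": "inference-logs", "region": "us-east-1"},  # replication drift
]
print(residency_violations(inventory))  # → [('inference-logs', 'us-east-1')]
```

Any non-empty result is a candidate sovereignty incident: either an undocumented transfer mechanism or an accidental replication to remediate.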
The engineering team retains security logs for a period that satisfies both the AI Act's ten-year documentation obligation and any applicable sector-specific requirements. Logs are stored in append-only, tamper-evident storage to ensure their admissibility as evidence in regulatory proceedings or incident investigations. Security documentation covers the chain of custody for log evidence, maintaining auditability throughout the retention period.
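One common construction for tamper-evident storage is a hash chain, where each record's digest covers the previous record's digest, so any in-place edit invalidates every later entry. This is a minimal sketch of the idea, not a substitute for an append-only storage service with independent custody controls:

```python
import hashlib
import json

# Each record's hash covers the previous hash, so an in-place edit breaks
# verification of the tampered record and everything after it.
GENESIS = "0" * 64

def append(chain, record: dict):
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "hash": digest})

def verify(chain) -> bool:
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"event": "model_deploy", "actor": "ci-pipeline"})
append(chain, {"event": "admin_login", "actor": "alice"})
print(verify(chain))                     # → True
chain[0]["record"]["actor"] = "mallory"  # simulated tampering
print(verify(chain))                     # → False
```

For the ten-year retention horizon, periodically anchoring the latest chain hash in an independent system strengthens the chain-of-custody argument, since no single administrator can rewrite both copies.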
The CI pipeline should include static application security testing (SAST), dynamic application security testing (DAST), software composition analysis (SCA), container image scanning, and infrastructure-as-code security scanning to create a defence-in-depth approach.
IAM access reviews should be conducted quarterly at minimum, verifying that every service account, role, and user has the minimum permissions required for their current function, with unused credentials deactivated.
AI-specific SIEM rules should detect sudden increases in inference API calls indicating model extraction attempts, patterns of systematically varied inputs indicating adversarial probing, changes to model artefact files outside the CI/CD pipeline, and access to training data outside scheduled execution windows.
CSPM tools continuously assess deployed infrastructure against a documented configuration baseline and industry benchmarks, alerting on drift and triggering remediation for critical deviations.
Security logs must be retained for a period satisfying the AI Act's ten-year documentation obligation, stored in append-only tamper-evident storage with documented chain of custody.