The first governance gate evaluates whether an AI system's risk classification remains valid and whether pipeline controls match the applicable risk tier. It fires on initial onboarding, characteristic changes, and scheduled annual re-evaluation, protecting against classification drift that incremental changes can cause.
The first governance gate fires when a new AI system enters the pipeline or when an existing system's characteristics change in a way that could affect its risk classification. The gate evaluates whether the system's classification remains valid and whether the pipeline's compliance controls are correctly configured for the applicable risk tier.
Three events trigger the gate. The first is a new system's initial onboarding into the governance pipeline, when the system is being prepared for its first deployment and the classification must be established before any compliance controls can be configured. The second is a change to the system's intended purpose, target population, deployment context, or model architecture, any of which could alter the risk classification and therefore the applicable compliance requirements. The third is a scheduled periodic re-evaluation, at minimum annually as specified in the governance policy, ensuring that the classification does not become stale as the regulatory landscape evolves.
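The three trigger events can be sketched as a small enumeration. This is an illustrative sketch only; the identifiers, the characteristic names, and the `needs_gate` helper are assumptions, not part of the governance policy itself.

```python
from enum import Enum


class GateTrigger(Enum):
    """The three events that fire the classification gate (illustrative names)."""
    INITIAL_ONBOARDING = "initial_onboarding"          # first entry into the pipeline
    CHARACTERISTIC_CHANGE = "characteristic_change"    # purpose, population, context, or architecture changed
    SCHEDULED_REEVALUATION = "scheduled_reevaluation"  # at minimum annual review


# Characteristics whose change could affect the risk classification, per the policy text.
TRIGGERING_CHARACTERISTICS = {
    "intended_purpose",
    "target_population",
    "deployment_context",
    "model_architecture",
}


def needs_gate(changed_fields: set[str]) -> bool:
    """True if any changed characteristic could affect the risk classification."""
    return bool(changed_fields & TRIGGERING_CHARACTERISTICS)
```

A change set that touches only operational details (for example, a logging setting) would not fire the gate under this sketch, while any overlap with the four listed characteristics would.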
The AI System Assessor reviews the system against the classification criteria from the risk assessment framework. The gate's structured output records the system identifier; the Annex III area, or the determination that the system is not high-risk; the Article 6(3) exception assessment where applicable, covering the functional criterion and the risk criterion; the Article 5 prohibition screening result, confirming the system does not fall within any prohibited category; and the AISDP tier as Full, Standard, or Baseline.
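The structured output described above could be modelled as a small typed record. The class and field names below are assumptions for illustration; the actual schema would come from the governance framework.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class AisdpTier(Enum):
    FULL = "Full"
    STANDARD = "Standard"
    BASELINE = "Baseline"


@dataclass(frozen=True)
class Article63Assessment:
    """Exception assessment, present only when the system sits in an Annex III area."""
    functional_criterion_met: bool
    risk_criterion_met: bool


@dataclass(frozen=True)
class ClassificationOutput:
    system_id: str
    annex_iii_area: Optional[str]               # None when the system is not high-risk
    article_6_3: Optional[Article63Assessment]  # None when no exception assessment applies
    article_5_prohibited: bool                  # True would require immediate cessation
    aisdp_tier: AisdpTier
```

Making the record frozen reflects that the gate output is an evidentiary artefact: once produced, it should not be mutated.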
The gate fails under three conditions. It fails if the classification cannot be determined with confidence because the system's characteristics are ambiguous or the regulatory guidance is unclear. It fails if the Article 5 screening identifies a prohibited practice, which requires immediate cessation rather than continued pipeline processing. It fails if the system's pipeline configuration does not match its determined risk tier, meaning the compliance controls currently applied are insufficient or excessive for the system's actual classification.
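The three failure conditions translate directly into a pass/fail check. A minimal sketch, assuming tiers are compared by name; the function and parameter names are illustrative:

```python
def gate_passes(
    classification_confident: bool,
    article_5_prohibited: bool,
    configured_tier: str,
    determined_tier: str,
) -> tuple[bool, list[str]]:
    """Evaluate the gate's three failure conditions; return (passed, reasons)."""
    reasons = []
    if not classification_confident:
        reasons.append("classification could not be determined with confidence")
    if article_5_prohibited:
        reasons.append("Article 5 screening identified a prohibited practice")
    if configured_tier != determined_tier:
        reasons.append("pipeline configuration does not match the determined risk tier")
    return (not reasons, reasons)
```

Collecting every applicable reason, rather than stopping at the first, gives the AI Governance Lead the full picture when determining the path to resolution.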
The gate produces a Classification Gate Record linked to the Classification Decision Record described in the risk classification framework. The record captures the complete evaluation including the trigger event that initiated the gate, the classification determination with supporting rationale, each screening result from the Article 5 and Article 6(3) assessments, and the pipeline configuration verification confirming the correct compliance controls are active for the determined risk tier.
The record is deposited in the governance artefact registry and referenced by AISDP Module 1 (System Identity). For subsequent re-evaluations, the record also captures the comparison against the previous classification, documenting whether the classification has changed and, if so, the specific characteristics that drove the change.
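The Classification Gate Record described above, including the re-evaluation comparison, could look like the following. This is a sketch under assumed field names; the registry's actual schema is not specified here.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ClassificationGateRecord:
    trigger_event: str                   # which of the three events initiated the gate
    determination: str                   # classification outcome
    rationale: str                       # supporting rationale for the determination
    article_5_result: str                # prohibition screening result
    article_6_3_result: Optional[str]    # exception assessment, where applicable
    pipeline_config_verified: bool       # correct controls active for the tier
    decision_record_ref: str             # link to the Classification Decision Record
    previous_classification: Optional[str] = None   # populated on re-evaluations
    changed_characteristics: list[str] = field(default_factory=list)

    @property
    def classification_changed(self) -> bool:
        """True when a re-evaluation reached a different outcome than last time."""
        return (self.previous_classification is not None
                and self.previous_classification != self.determination)
```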
Three roles share accountability for the classification gate's operation, ensuring that technical, legal, and governance perspectives are all represented in the determination. The AI System Assessor conducts the evaluation, examining the system against the classification criteria and producing the structured output with supporting evidence.
The Legal and Regulatory Advisor reviews the Article 5 prohibition screening and the Article 6(3) exception assessment, providing the legal interpretation needed for borderline cases where the classification could reasonably go either way. The Legal and Regulatory Advisor's review is particularly important for systems that fall near the boundary between high-risk and limited-risk, where the Article 6(3) exception criteria require judgement about whether the system performs narrow procedural tasks, improves previously completed human activities, or detects patterns without replacing human assessment.
The AI Governance Lead approves the classification determination, taking accountability for the final decision. This approval is recorded in the Classification Gate Record with the approver's identity, timestamp, and any conditions attached. Where the classification is disputed between the Assessor and the Legal and Regulatory Advisor, the AI Governance Lead resolves the disagreement and documents the rationale for the chosen interpretation.
If the classification cannot be determined, the gate fails and the system cannot proceed through the governance pipeline until the classification is resolved. The AI System Assessor documents the ambiguity and the AI Governance Lead determines the path to resolution.
The gate runs at minimum annually through scheduled re-evaluation, and additionally whenever the system's intended purpose, target population, deployment context, or model architecture changes.
The AISDP tier (Full, Standard, or Baseline) determines the level of documentation and compliance controls applied. A mismatch between classification and pipeline tier configuration means the system may be under-governed or over-governed for its actual risk level.
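The mismatch check can distinguish under-governance from over-governance by ordering the tiers. The ordering below is an assumption (Full as the most stringent tier), illustrative only:

```python
from typing import Optional

# Illustrative tier ordering; the actual control sets come from the AISDP.
TIER_RANK = {"Baseline": 0, "Standard": 1, "Full": 2}


def governance_mismatch(configured: str, determined: str) -> Optional[str]:
    """Return 'under' if controls are insufficient, 'over' if excessive, None if matched."""
    if TIER_RANK[configured] < TIER_RANK[determined]:
        return "under"
    if TIER_RANK[configured] > TIER_RANK[determined]:
        return "over"
    return None
```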