The EU AI Act establishes a four-tier risk classification framework that determines every obligation attaching to an AI system. Understanding where a system falls within this hierarchy is the precondition for all subsequent risk assessment, conformity assessment, and ongoing compliance activities. A classification error at this stage cascades into every downstream decision.
Tier 1 covers prohibited practices under Article 5. Systems deploying subliminal manipulation, exploiting the vulnerabilities of specific groups, implementing social scoring by public authorities, performing untargeted scraping of facial images to build facial recognition databases, recognising emotions in workplaces or educational institutions outside narrow exceptions, assessing criminal risk solely through profiling, or performing real-time remote biometric identification in public spaces are prohibited. These systems cannot proceed through the AISDP process; identifying one triggers immediate escalation and cessation.
Tier 2 covers high-risk systems under Annex III and Article 6. Systems falling within the eight Annex III domains (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration and border control, and administration of justice), or constituting safety components of products governed by Annex I harmonisation legislation, require the full AISDP with all twelve modules, conformity assessment, CE marking, and EU database registration.
Tier 3 covers limited-risk systems under Article 50, triggering transparency obligations for chatbots, emotion recognition systems, biometric categorisation systems, and systems generating synthetic content. These require a Standard AISDP addressing transparency measures. Tier 4 covers minimal-risk systems that do not trigger any of the above categories, requiring only a Baseline AISDP confirming the classification rationale.
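The four tiers are evaluated in strict order of severity: prohibited practices first, then high-risk criteria, then transparency triggers, with minimal risk as the residual category. The sketch below illustrates that precedence only; the RiskTier enum and every predicate on `system` are hypothetical names of our own choosing, not terms defined by the Act or the AISDP.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"        # Article 5 practices
    HIGH_RISK = "high_risk"          # Annex III domains / Annex I safety components
    LIMITED_RISK = "limited_risk"    # Article 50 transparency obligations
    MINIMAL_RISK = "minimal_risk"    # residual category

def classify(system) -> RiskTier:
    # Illustrative precedence only; every predicate on `system` is hypothetical.
    if system.uses_prohibited_practice():          # Article 5: escalate and cease, no AISDP path
        return RiskTier.PROHIBITED
    if system.is_annex_i_safety_component():       # safety component under harmonisation legislation
        return RiskTier.HIGH_RISK
    if system.in_annex_iii_domain() and not system.meets_article_6_3_exception():
        return RiskTier.HIGH_RISK                  # full AISDP, conformity assessment, CE marking
    if system.triggers_article_50_transparency():  # chatbots, emotion recognition, synthetic content
        return RiskTier.LIMITED_RISK               # Standard AISDP with transparency measures
    return RiskTier.MINIMAL_RISK                   # Baseline AISDP confirming the rationale
```

The ordering matters: a chatbot embedded in a recruitment workflow, for instance, is assessed against the employment domain of Annex III before its Article 50 transparency obligations are even considered.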
The Article 6(3) exception allows certain systems that would otherwise be classified as high-risk to be treated as lower risk if two conditions are both satisfied. The functional criterion requires the system's function to fall within specified categories: performing narrow procedural tasks, improving the results of previously completed human activities, or detecting decision-making patterns without replacing human assessment.
The risk criterion requires that the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Both criteria must be met; satisfying one alone is insufficient.
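Because the two criteria are cumulative, the logic reduces to a simple conjunction. The sketch below is illustrative only: the function and category names are hypothetical, and the categories mirror only those listed above.

```python
# Hypothetical sketch: the Article 6(3) exception requires BOTH criteria to hold.
FUNCTIONAL_CATEGORIES = {
    "narrow_procedural_task",
    "improves_previously_completed_human_activity",
    "detects_patterns_without_replacing_human_assessment",
}

def article_6_3_exception_applies(functional_category: str,
                                  poses_significant_risk: bool) -> bool:
    functional_ok = functional_category in FUNCTIONAL_CATEGORIES
    risk_ok = not poses_significant_risk  # no significant risk to health, safety, or fundamental rights
    return functional_ok and risk_ok      # satisfying one alone is insufficient
```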
The AI System Assessor documents reliance on this exception with rigorous analysis addressing each criterion separately. Both the Legal and Regulatory Advisor and the AI Governance Lead must review and approve any claim. The risk assessment should treat the exception as a hypothesis to be tested against evidence, not as a convenient exit from compliance obligations.
The risk assessment must begin with a classification confirmation. Before conducting any detailed risk analysis, the assessor verifies that the system's Classification Decision Record is current, that no reclassification triggers have been activated since the CDR was approved, and that the classification rationale remains sound given the system's current deployment context. A system that has drifted from its intended purpose into a higher-risk domain since classification requires reclassification before the risk assessment proceeds.
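A rough illustration of those three checks follows, assuming a record structure of our own invention rather than anything prescribed by the AISDP templates.

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationConfirmation:
    # Hypothetical fields mirroring the three pre-assessment checks described above.
    cdr_is_current: bool                                          # Classification Decision Record still approved and in force
    activated_triggers: list[str] = field(default_factory=list)  # e.g. "intended_purpose_changed"
    rationale_holds_in_current_context: bool = True               # no drift into a higher-risk domain

    def may_proceed_to_risk_assessment(self) -> bool:
        # Any activated trigger or stale rationale forces reclassification first.
        return (self.cdr_is_current
                and not self.activated_triggers
                and self.rationale_holds_in_current_context)
```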
Classification is not a one-time determination. The Legal and Regulatory Advisor reassesses the classification when the system's intended purpose changes, when the deployment context changes, when the provider updates the system in ways that could affect the risk profile, and at minimum annually. Each reassessment is documented with reasoning and conclusion.
Classification can change after it is first made. Reclassification triggers include changes to intended purpose, expansion into new deployment domains, modification of the affected population, and regulatory framework updates; a system that drifts into a higher-risk domain must be reclassified before the risk assessment proceeds.
The consequences of misclassification are severe. A classification error cascades into every downstream compliance decision, invalidating the entire risk assessment, AISDP, and any conformity assessment built on the wrong tier. Classification confirmation is the protective measure against this risk.
Reliance on the Article 6(3) exception is never claimed unilaterally. Both the functional criterion (narrow procedural tasks or pattern detection) and the risk criterion (no significant harm to health, safety, or rights) must be met simultaneously; satisfying one alone is insufficient. The AI System Assessor documents the analysis addressing each criterion separately, treating the exception as a hypothesis to test against evidence, and both the Legal and Regulatory Advisor and the AI Governance Lead must review and approve the claim.
Even minimal-risk systems require documentation. A Baseline AISDP confirming the classification rationale is required, recording that every higher tier was considered and ruled out with stated reasoning; the classification record must be defensible.
The evidence underpinning any classification decision is the Classification Decision Record, documenting the system description, the assessment against each tier, supporting evidence, reviewer approval, and a version history demonstrating active maintenance.
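A minimal sketch of what such a record might contain follows; the field names are assumptions of our own, not a prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TierAssessment:
    tier: str                 # "prohibited", "high_risk", "limited_risk", "minimal_risk"
    applies: bool
    reasoning: str            # why the tier was ruled in or out
    evidence: list[str] = field(default_factory=list)       # references to supporting material

@dataclass
class ClassificationDecisionRecord:
    system_description: str
    tier_assessments: list[TierAssessment]                   # every tier considered, including those ruled out
    conclusion: str                                           # the resulting classification and rationale
    reviewers: list[str] = field(default_factory=list)       # e.g. Legal and Regulatory Advisor, AI Governance Lead
    approved_on: date | None = None
    version_history: list[str] = field(default_factory=list) # demonstrates active maintenance
```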