Article 27 of the EU AI Act requires deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into service. The FRIA provides a rights-based lens on risk, examining the system from the perspective of affected persons rather than from the provider's engineering standpoint. This page covers the full FRIA methodology, intersectional analysis requirements, stakeholder engagement, and integration with the Article 9 risk management system.
Article 27 requires deployers of high-risk AI systems to conduct the FRIA before putting the system into service. For organisations that are both provider and deployer, it provides a second lens on risk, examining the system from the perspective of the persons and groups it affects rather than from the provider's engineering standpoint.
The FRIA addresses questions distinct from those covered by the provider's risk management system under Article 9. It examines the deployer's specific processes in which the system will operate, the categories of natural persons and groups likely to be affected, the specific risks of harm to those categories, the human oversight measures in the deployment context, and the measures to be taken if risks materialise.
The assessment follows six steps. Step 1 describes the AI system and its context of use. Step 2 identifies the fundamental rights at stake. Step 3 assesses the risks to those rights. Step 4 evaluates existing safeguards. Step 5 assesses proportionality and necessity. Step 6 identifies additional mitigation measures. The outputs feed into AISDP Module 11, the risk register in Module 6, and the market surveillance authority notification.
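As an illustration only, the six steps and their downstream destinations can be tracked in a simple record; the class, field, and function names below are hypothetical, not prescribed by Article 27 or the AISDP.

```python
from dataclasses import dataclass, field

# Hypothetical record for tracking FRIA progress; the fields follow the
# six-step methodology described above and are purely illustrative.
@dataclass
class FriaRecord:
    system_description: str            # Step 1: AI system and context of use
    rights_at_stake: list[str]         # Step 2: Charter rights identified
    risk_assessments: dict[str, str]   # Step 3: assessed risk per right
    existing_safeguards: list[str]     # Step 4: safeguards already in place
    proportionality_notes: str         # Step 5: proportionality and necessity
    additional_mitigations: list[str] = field(default_factory=list)  # Step 6

    def outputs(self) -> dict[str, str]:
        """Destinations named in the text: AISDP Module 11, the Module 6
        risk register, and the market surveillance authority notification."""
        return {
            "aisdp_module_11": "full FRIA documentation",
            "module_6_risk_register": "FRIA-identified risks",
            "msa_notification": "FRIA results per Article 27(4)",
        }
```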
The assessment must cover all fundamental rights recognised under the EU Charter that could plausibly be affected by the system's operation. The FRIA must not be treated as a template exercise; the AI System Assessor tailors it to the specific system, deployment context, and affected populations.
For a recruitment screening system, relevant rights include non-discrimination under Charter Article 21, freedom to choose an occupation under Article 15, protection of personal data under Article 8, the right to an effective remedy under Article 47, and the right to good administration under Article 41. For a credit scoring system, the right to property under Article 17 and the prohibition of discrimination in access to services become relevant.
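For reuse across assessments, the examples above can be recorded as a simple lookup. The structure below merely restates the two examples from the text; it is illustrative, not a complete rights inventory.

```python
# Illustrative mapping of the two deployment examples above to the Charter
# articles named in the text; a real FRIA derives this per system.
CHARTER_RIGHTS_AT_STAKE = {
    "recruitment_screening": {
        "Article 21": "non-discrimination",
        "Article 15": "freedom to choose an occupation",
        "Article 8": "protection of personal data",
        "Article 47": "right to an effective remedy",
        "Article 41": "right to good administration",
    },
    "credit_scoring": {
        "Article 17": "right to property",
        "Article 21": "non-discrimination in access to services",
    },
}
```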
The FRIA must assess intersectional vulnerability where multiple factors such as age, disability, economic situation, and ethnic origin compound in the same individuals or groups. A recruitment system performing adequately for each protected characteristic in isolation may still produce discriminatory outcomes for persons belonging to multiple disadvantaged groups simultaneously. The assessment documents the methodology used to identify intersectional effects, the data available to test for them, and any compensating controls where data limitations prevent comprehensive analysis.
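A minimal sketch of one way to implement this with Fairlearn's MetricFrame follows: passing two sensitive-feature columns makes it compute metrics for every observed combination rather than each axis in isolation, and subgroups below a minimum cell size are flagged as inconclusive rather than dropped. The sample data, column names, and the threshold of 30 are illustrative assumptions, not requirements of the Act.

```python
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Hypothetical screening outcomes; column names and values are illustrative.
df = pd.DataFrame({
    "y_true":     [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred":     [1, 0, 0, 1, 0, 1, 1, 0],
    "age_band":   ["<30", "<30", "30-50", ">50", ">50", "30-50", "<30", ">50"],
    "disability": ["yes", "no", "no", "yes", "no", "yes", "no", "no"],
})

MIN_CELL_SIZE = 30  # illustrative threshold, not mandated by the Act

# Passing two sensitive-feature columns makes MetricFrame evaluate every
# observed combination (the intersectional subgroups).
mf = MetricFrame(
    metrics={"selection_rate": selection_rate, "accuracy": accuracy_score},
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df[["age_band", "disability"]],
)

# Flag small subgroups as inconclusive rather than omitting them,
# as the assessment requires.
counts = df.groupby(["age_band", "disability"]).size().rename("n")
report = mf.by_group.join(counts)
report["status"] = ["assessed" if n >= MIN_CELL_SIZE
                    else "inconclusive (cell below minimum size)"
                    for n in report["n"]]
print(report)
```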
Article 27 implicitly requires meaningful engagement with affected persons or their representatives. The FRIA documents which stakeholders were consulted, how input was gathered, what concerns were raised, and how those concerns were addressed in risk treatment measures. Generic stakeholder statements are insufficient; the consultation must produce actionable insights that influence the assessment's conclusions.
Article 27(4) requires deployers to notify the relevant market surveillance authority of the FRIA results. Notification procedures, timelines, and formats are documented and tested by the Legal and Regulatory Advisor before the system goes live. Public authority deployers must also publish a summary of the FRIA.
The AI System Assessor integrates FRIA findings with the provider's risk register in Module 6. Risks identified through the FRIA that were not captured in the provider's analysis are added. Where the FRIA identifies risks that the provider's mitigations do not adequately address, the deployer implements supplementary controls and documents them in Module 11.
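A hypothetical sketch of that reconciliation step, with the record shapes assumed purely for illustration:

```python
# Hypothetical sketch of the Module 6 reconciliation described above;
# the module names mirror the text, the record shapes are assumptions.
def reconcile(fria_findings: list[dict], risk_register: dict[str, dict]) -> None:
    for finding in fria_findings:
        risk_id = finding["risk_id"]
        if risk_id not in risk_register:
            # Risk surfaced by the FRIA but absent from the provider's
            # analysis: add it to the Module 6 risk register.
            risk_register[risk_id] = {**finding, "source": "FRIA"}
        elif not finding.get("provider_mitigation_adequate", False):
            # Provider mitigations fall short: record the deployer's
            # supplementary control and point to AISDP Module 11.
            risk_register[risk_id]["supplementary_control"] = finding["control"]
            risk_register[risk_id]["documented_in"] = "AISDP Module 11"
```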
The FRIA and the GDPR Data Protection Impact Assessment should be conducted in parallel and cross-referenced where the system processes personal data. They address different dimensions: the DPIA assesses data processing risk; the FRIA assesses fundamental rights impact across the full range of Charter protections extending well beyond non-discrimination.
The FRIA does not duplicate the Article 9 risk assessment: the Article 9 assessment examines what can fail from an engineering standpoint, while the FRIA examines which fundamental rights of affected persons could be infringed. Organisations holding both provider and deployer roles must conduct both, cross-referencing findings at completion.
The 2023 methodology from the EU Agency for Fundamental Rights (FRA) provides a structured framework, but the content must be tailored to the specific system and deployment context. A generic rights listing without system-specific analysis will not satisfy Article 27.
A dedicated FRIA lead, typically the DPO Liaison or a fundamental rights specialist, coordinates the assessment independently from the technical risk team. External stakeholders including affected persons, civil society representatives, and domain experts participate through structured consultations.
The FRIA is reviewed whenever the deployment context changes materially, the affected population shifts, or new evidence emerges about the system's impact on fundamental rights. Each review follows the same six-step methodology and produces an updated AISDP Module 11.
Where the deployer decides not to act on a FRIA finding, the rationale for inaction must be recorded and reviewed by the AI Governance Lead. Every FRIA finding must have a corresponding risk register entry or a documented rationale for exclusion. Undocumented decisions to ignore stakeholder concerns undermine the FRIA's credibility.
Intersectional analysis reveals discriminatory outcomes where multiple vulnerability factors compound in the same individuals, which single-axis analysis misses. Subgroups below minimum cell sizes are flagged as inconclusive, not omitted.
The recommended framework is the FRA's 2023 six-step methodology: describe the system, identify the rights at stake, assess the risks to those rights, evaluate existing safeguards, assess proportionality, and identify additional mitigations. Each step feeds into AISDP Module 11.
The FRIA and the provider's Article 9 risk assessment run in parallel, with a structured cross-reference at completion. FRIA-identified risks not in the provider's analysis are added to Module 6. Supplementary controls are documented in Module 11.
Supporting tools include Fairlearn's MetricFrame for multi-dimensional metric computation and Aequitas for automated report generation, both of which can compute fairness metrics across combinations of defined sensitive features.
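A MetricFrame sketch appears earlier on this page. For the Aequitas side, the following is a minimal sketch assuming the classic 0.x crosstab API; because Aequitas audits each attribute column independently, intersectional subgroups are exposed here through a derived combined column. All data is illustrative.

```python
import pandas as pd
from aequitas.group import Group

# Hypothetical scored data; classic Aequitas expects 'score' and
# 'label_value' columns plus string-typed attribute columns.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 0, 1, 0, 1, 1, 0],
    "age_band":    ["<30", "<30", "30-50", ">50", ">50", "30-50", "<30", ">50"],
    "disability":  ["yes", "no", "no", "yes", "no", "yes", "no", "no"],
})

# Derive a combined attribute so each intersection is audited as a group.
df["age_x_disability"] = df["age_band"] + " & " + df["disability"]

xtab, _ = Group().get_crosstabs(df[["score", "label_value", "age_x_disability"]])
print(xtab[["attribute_name", "attribute_value", "group_size", "fpr", "fnr"]])
```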