Article 27 requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment (FRIA) before putting the system into service. The FRIA is the single most demanding documentation obligation a deployer faces under the EU AI Act. It evaluates the system's potential effects on the full range of Charter of Fundamental Rights protections relevant to the deployment context. Unlike purely technical evaluations, the FRIA examines how real-world use of the system may affect the rights of individuals and groups who encounter its outputs.
The assessment must be completed before the system enters operational use, ensuring that deployers identify and address fundamental rights risks at the earliest possible stage. The Deployer Handbook provides the broader context for all deployer obligations under the Act.
The FRIA must assess the system's impact against every Charter of Fundamental Rights protection relevant to the deployment context. Rights may only be excluded where the deployer can demonstrate they are not engaged. The following table identifies key rights, their relevance to AI deployment, and commonly observed assessment gaps.
| Charter right | Relevance to AI deployment | Common assessment gap |
| --- | --- | --- |
| Article 1: human dignity | Systems that reduce persons to data profiles or scores | Assessors frequently omit dignity and focus only on discrimination |
| Article 7: respect for private life | Systems that process personal or behavioural data | Typically addressed in the DPIA but not adequately linked to the FRIA |
| Article 21: non-discrimination | The most commonly assessed right; outputs affect access to employment, credit, insurance, education, or public services | Analysis often limited to a single axis |
| Article 24: rights of the child | Educational and social services contexts where the affected population includes minors | Frequently overlooked |
| Article 31: fair and just working conditions | Employment deployments involving workplace monitoring, task allocation, or performance management | Under-assessed |
| Article 47: right to an effective remedy | Systems whose outputs cannot be challenged or explained | Structurally important yet often absent from FRIAs |

Fundamental Rights in High-Risk AI explores these Charter provisions in greater detail.
The FRIA follows a structured six-step methodology that moves from scope definition through to stakeholder consultation. Each step builds on the preceding one, producing a cumulative record of analysis.
Step one is scope definition: identify the system, its intended purpose in the deployer's context, the categories of affected persons, and the decisions the system supports. The deployer must confirm that the deployment falls within the provider's documented intended purpose.
Step two is rights mapping. The deployer identifies every Charter right that could be affected. Rights are eliminated only where the deployer can demonstrate the right is not engaged by the specific deployment.
Step three is impact analysis. For each engaged right, the deployer assesses the nature, severity, and likelihood of adverse impact, considering both direct and indirect effects.
Step four is intersectional analysis. The deployer assesses whether the system's impact differs for persons at the intersection of multiple protected characteristics. A system may perform adequately for women and for ethnic minorities separately, yet produce adverse outcomes for women from a specific ethnic minority group.
Step five is mitigation assessment. For each identified impact, the deployer evaluates whether the provider's mitigations adequately address the risk in the deployer's context. Where they do not, the deployer defines supplementary controls with rationale and evidence of effectiveness.
Step six is stakeholder consultation. The deployer consults affected groups or their representatives and demonstrates how stakeholder input influenced the assessment's conclusions.
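As an illustration only, the cumulative record the six steps produce could be captured in a structure like the following sketch. The class and field names are hypothetical, not drawn from the Act or from any official template:

```python
from dataclasses import dataclass, field

@dataclass
class RightAssessment:
    """One engaged Charter right and the analysis built up across steps 3-5."""
    charter_article: str                 # e.g. "Article 21 (non-discrimination)"
    severity: str                        # step 3: nature and severity of adverse impact
    likelihood: str                      # step 3: likelihood of adverse impact
    intersectional_findings: list[str] = field(default_factory=list)  # step 4
    provider_mitigations_adequate: bool = False                       # step 5
    supplementary_controls: list[str] = field(default_factory=list)   # step 5

@dataclass
class FRIARecord:
    """Cumulative record produced by the six-step methodology (illustrative)."""
    system: str                            # step 1: system and intended purpose
    affected_persons: list[str]            # step 1: categories of affected persons
    engaged_rights: list[RightAssessment]  # steps 2-5
    excluded_rights: dict[str, str]        # step 2: right -> why it is not engaged
    stakeholder_input: list[str]           # step 6: consultation and its influence
```

Whatever form the record takes, the point of the structure is the same: each step's output becomes an explicit field that later steps, reviewers, and the market surveillance authority can trace.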
Where the AI system processes personal data, a Data Protection Impact Assessment under GDPR Article 35 will also be required alongside the FRIA. The two assessments should be conducted in parallel and cross-referenced, as they address different dimensions: the DPIA assesses data processing risk, while the FRIA assesses fundamental rights impact.
A common weakness is conducting a single assessment that conflates the two perspectives. The Legal and Regulatory Advisor should maintain both as separate but cross-referenced documents. This ensures that neither the data protection analysis nor the fundamental rights analysis is diluted by merging them into one exercise. Data Governance for High-Risk AI covers the data protection dimension in detail.
Article 27(3) requires the deployer to notify the relevant market surveillance authority of the FRIA results. Public authority deployers must also publish a summary of the assessment. These obligations ensure transparency and enable regulatory oversight of how deployers have evaluated the fundamental rights implications of their AI systems.
Where the provider has not supplied disaggregated fairness data, the deployer can generate it from operational data. Once the system has been in service, the deployer's own records of system outputs, combined with demographic information about affected persons collected in the course of GDPR-compliant processing, provide the basis for a deployer-side fairness analysis.
The analysis need not replicate the provider's full model evaluation. It should assess whether real-world outputs, in the deployer's operational context, produce disparate impact across protected characteristic subgroups. Risk Classification and Management provides further context on proportionate assessment approaches.
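A minimal sketch of such a deployer-side check, assuming operational records are available as a pandas DataFrame. The column names (`sex`, `approved`) and the data are fabricated for illustration, and the four-fifths threshold is a common rule of thumb, not a statutory test:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of favourable outcomes per subgroup of one protected characteristic."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Lowest subgroup selection rate divided by the highest.
    A ratio below ~0.8 is a common (non-statutory) flag for review."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Fabricated example data, for illustration only.
records = pd.DataFrame({
    "sex":      ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,    1,   0,   1,   1,   1,   1,   1],
})
print(disparate_impact_ratio(records, "sex", "approved"))  # 0.75 -> below 0.8, review
```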
The intersectional analysis is technically challenging because the number of intersectional subgroups grows combinatorially. A practical approach is to evaluate all single-axis subgroups first, identify those showing elevated risk, and then evaluate the intersections of elevated-risk groups. This keeps the analysis tractable without omitting material intersections.
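A rough sketch of that pruning strategy under the same assumptions (pandas-style records, illustrative column names and thresholds); it scans pairwise intersections of the flagged axes, and deeper intersections can be added the same way:

```python
from itertools import combinations
import pandas as pd

ELEVATED = 0.8  # illustrative four-fifths-style review trigger, not a statutory threshold
MIN_N = 30      # illustrative minimum subgroup size for a stable rate estimate

def di_ratio(df: pd.DataFrame, cols, outcome: str) -> float:
    """Disparate impact ratio across the subgroups defined by `cols`."""
    grouped = df.groupby(list(cols))[outcome]
    rates = grouped.mean()[grouped.size() >= MIN_N]   # skip unstable tiny subgroups
    return rates.min() / rates.max() if len(rates) > 1 else 1.0

def intersectional_scan(df: pd.DataFrame, protected_cols, outcome: str):
    # Pass 1: evaluate every single-axis protected characteristic.
    flagged = [c for c in protected_cols if di_ratio(df, [c], outcome) < ELEVATED]
    # Pass 2: evaluate only the intersections of the elevated-risk axes.
    intersections = {pair: di_ratio(df, pair, outcome)
                     for pair in combinations(flagged, 2)}
    return flagged, intersections
```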
The FRIA can be conducted without specialised tooling. A structured template walks the assessor through the six steps described above, with the Charter rights mapping completed manually using the rights table as the starting reference.
The impact analysis is conducted through structured interviews with operational staff, affected person representatives, and domain experts. The output is a written report, reviewed by the Legal and Regulatory Advisor, approved by the AI Governance Lead, and retained in Module D3 of the deployer compliance record. This procedural approach ensures that deployers without access to automated assessment tools can still meet their Article 27 obligations in full.
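One hypothetical way to keep even the manual process auditable is to encode the template's prompts as data and check completeness before review. The section headings and questions below are illustrative, not taken from any official template:

```python
# Hypothetical skeleton of a tool-free FRIA template: section prompts only,
# completed manually and then reviewed per the approval chain described above.
FRIA_TEMPLATE = {
    "1. Scope definition":         ["System and version?", "Intended purpose in our context?",
                                    "Affected person categories?", "Decisions supported?"],
    "2. Rights mapping":           ["Engaged Charter rights?", "Excluded rights and why?"],
    "3. Impact analysis":          ["Nature, severity, and likelihood per engaged right?"],
    "4. Intersectional analysis":  ["Differential impact at characteristic intersections?"],
    "5. Mitigation assessment":    ["Provider mitigations adequate?", "Supplementary controls?"],
    "6. Stakeholder consultation": ["Who was consulted?", "How did input change conclusions?"],
}

def unanswered(responses: dict[str, dict[str, str]]) -> list[str]:
    """List every template prompt that still lacks a written response."""
    return [q for section, prompts in FRIA_TEMPLATE.items()
              for q in prompts
              if not responses.get(section, {}).get(q, "").strip()]
```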