
Fundamental Rights Impact Assessment (FRIA): A Practical Guide to EU AI Act Article 27


Article 27 of the EU AI Act introduces a new obligation for deployers of high-risk AI systems: the Fundamental Rights Impact Assessment (FRIA). The AI Office is supposed to publish a template. It hasn't. The obligation applies anyway from 2 August 2026.

Here's what you need to do.

What Article 27 Requires

Article 27 applies to:

  • Public sector bodies deploying high-risk AI systems
  • Private entities providing public services (including privatized services)
  • Deployers of AI used for creditworthiness (Annex III point 5(b))
  • Deployers of AI used for insurance risk/pricing (Annex III point 5(c))

Before first use of a high-risk AI system, these deployers must perform a FRIA covering:

  1. Process description — How the system will be used
  2. Timing and frequency — When and how often it's used
  3. Affected categories — Who is likely to be impacted
  4. Specific harm risks — What could go wrong for the affected categories
  5. Human oversight measures — How humans supervise the system
  6. Mitigation measures — What happens when risks materialize

The FRIA must be notified to the market surveillance authority.
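The six required elements map naturally onto a structured record. A minimal TypeScript sketch — the interface and field names are our own illustration, since no official template exists yet:

```typescript
// Illustrative shape for the Article 27 assessment elements.
// Field names are our own invention, not from the Act or any official template.
interface FriaRecord {
  processDescription: string;       // how the system will be used
  usagePeriod: string;              // when and how often it is used
  affectedCategories: string[];     // who is likely to be impacted
  harmRisks: string[];              // specific risks of harm to those categories
  humanOversight: string;           // how humans supervise the system
  mitigationMeasures: string[];     // what happens when risks materialize
}

// A FRIA is only notifiable once every element is actually filled in.
function isComplete(fria: FriaRecord): boolean {
  return (
    fria.processDescription.length > 0 &&
    fria.usagePeriod.length > 0 &&
    fria.affectedCategories.length > 0 &&
    fria.harmRisks.length > 0 &&
    fria.humanOversight.length > 0 &&
    fria.mitigationMeasures.length > 0
  );
}
```

Treating the assessment as data rather than a free-form document makes it easier to version, review, and update later.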

Why This Is Different from a DPIA

A Data Protection Impact Assessment (DPIA) focuses on data protection under GDPR. A FRIA is broader — it covers the full spectrum of fundamental rights under the EU Charter.

Overlapping but distinct rights assessed in a FRIA:

Charter Article   Right
Art. 1            Human dignity
Art. 8            Protection of personal data
Art. 11           Freedom of expression
Art. 15           Freedom to choose an occupation
Art. 20           Equality before the law
Art. 21           Non-discrimination
Art. 26           Integration of persons with disabilities
Art. 34           Social security and social assistance
Art. 35           Health care
Art. 47           Right to an effective remedy

The disability rights dimension (Article 26) is underemphasized in most guidance documents. It's critical — especially when combined with Article 16(l) of the AI Act, which requires accessibility of high-risk AI systems.

The DIHR/ECNL Methodology (5 Phases)

The Danish Institute for Human Rights (DIHR) and the European Center for Not-for-Profit Law (ECNL) published an operational guide in December 2025. It's the most practical framework available. Five phases:

Phase 1: Planning and Scoping

  • Timing: ideally pre-procurement
  • Budget allocation for the process
  • Multidisciplinary team composition (in-house, external, or hybrid)
  • Context analysis: deployment context, system features, governance

Phase 2: Assess and Mitigate Negative Impacts

  • Develop "typical" and "worst-case" scenarios
  • Map scenarios against affected fundamental rights
  • Assess severity and likelihood using:
    • Scope of impact
    • Gravity of harm
    • Irreversibility of consequences
    • Vulnerability of affected groups
  • Define mitigation measures:
    • Organisational safeguards (governance, training, oversight)
    • Technical safeguards (bias detection, accuracy monitoring)
    • Contractual safeguards (provider obligations, SLAs)
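The severity-and-likelihood step can be sketched as a simple scoring model. The four factors come from the guide; the 1-to-5 scales, the averaging, and the thresholds below are our own assumptions for illustration, not part of the DIHR/ECNL methodology:

```typescript
// Scores run 1 (low) to 5 (high); the scale itself is an assumption.
interface SeverityFactors {
  scope: number;           // how many people the impact reaches
  gravity: number;         // how serious the harm is
  irreversibility: number; // how hard the harm is to undo
  vulnerability: number;   // how vulnerable the affected groups are
}

type Likelihood = 'rare' | 'possible' | 'likely';

// Average the four factors into a single severity score.
function severity(f: SeverityFactors): number {
  return (f.scope + f.gravity + f.irreversibility + f.vulnerability) / 4;
}

// Combine severity and likelihood into a mitigation priority bucket.
// Thresholds are illustrative and should be calibrated per deployment.
function priority(f: SeverityFactors, l: Likelihood): 'low' | 'medium' | 'high' {
  const weight = { rare: 1, possible: 2, likely: 3 }[l];
  const score = severity(f) * weight;
  if (score >= 9) return 'high';
  if (score >= 5) return 'medium';
  return 'low';
}
```

However you score it, keep the factor-level inputs in the record: "high" alone doesn't tell a reviewer whether the driver was scale or irreversibility.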

Phase 3: Deployment Decision and Public Reporting

  • Framework for deciding when to NOT deploy
  • Notification to market surveillance authority
  • Public reporting of FRIA results

Phase 4: Monitoring and Review

  • Ongoing post-deployment monitoring
  • Periodic review and FRIA updates
  • Comparison of predicted vs actual impacts
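The predicted-vs-actual comparison can be as lightweight as diffing the metrics you forecast in Phase 2 against observed values. A minimal sketch — the metric names and the 10% tolerance are hypothetical:

```typescript
// Hypothetical monitoring record: one forecast metric and its observed value.
interface ImpactMetric {
  name: string;      // e.g. a denial or complaint rate you predicted in Phase 2
  predicted: number;
  observed: number;
}

// Flag any metric whose observed value exceeds the prediction by more than
// the tolerance, signalling that the FRIA needs a review and update.
function needsReview(metrics: ImpactMetric[], tolerance = 0.1): string[] {
  return metrics
    .filter((m) => m.observed > m.predicted * (1 + tolerance))
    .map((m) => m.name);
}
```

The point of the exercise is the feedback loop: a flagged metric should reopen the Phase 2 scenarios, not just generate a dashboard alert.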

Phase 5: Consulting Affected Groups (Cross-cutting)

This phase applies throughout all other phases. Meaningful consultation with:

  • Affected communities
  • Civil society organizations
  • Domain experts
  • Disability rights organizations (especially relevant for Article 26)

Common Mistakes

1. Treating it as a compliance checkbox

A FRIA is a governance mechanism, not a document to file and forget. It must inform deployment decisions, not justify them after the fact.

2. Generic rights analysis

Every FRIA must be specific to the AI system. "The system could impact non-discrimination" is meaningless. What specific groups? What specific discrimination mechanisms? What evidence?

3. Skipping disability impact

Article 26 of the Charter and Article 9(9) of the AI Act explicitly require considering impact on persons with disabilities. Most FRIA templates don't emphasize this. Your system might pass bias testing for protected attributes but still fail persons with cognitive disabilities who can't understand the decision explanation.

4. No real stakeholder engagement

"We consulted our diversity team" is not stakeholder engagement. Real engagement means talking to the people affected by the system before deployment, and documenting how their feedback shaped design decisions.

5. Ignoring the "typical" scenario

Worst-case thinking is popular but insufficient. You also need to assess the "typical" scenario — what happens when everything works as designed. Sometimes the typical case has harmful normalized effects (e.g., algorithmic sorting that embeds existing biases without triggering outlier detection).

How to Generate a FRIA Now

The AI Office template isn't published. We built an open source FRIA generator based on the DIHR/ECNL methodology, with disability impact analysis built in:

import { generateFRIA } from '@eucompliance/fria-generator'
import { classify } from '@eucompliance/ai-act-classifier'

// Step 1: classify the system under the AI Act risk framework
const classification = classify({
  name: 'Benefits Assessor',
  purpose: 'Social welfare eligibility assessment',
  affectedPersons: ['public'],
  decisionMaking: 'semi_automated',
})

// Step 2: generate the FRIA from the classification and deployment context
const fria = generateFRIA({
  systemName: 'Benefits Assessor',
  purpose: 'Automated assessment of welfare benefit eligibility',
  classification,
  deployerOrganization: 'Municipality of Barcelona',
  sector: 'Public administration',
  country: 'Spain',
  affectedPopulation: 'Citizens applying for social welfare benefits',
  scale: 'regional',
  makesDecisions: true,
  publicSector: true,
})

// fria.generatedReport is a full Markdown FRIA report

The generator assesses 10 fundamental rights, identifies disability-specific risks for each, and produces mitigation priorities. It's a starting point — not a replacement for real stakeholder engagement.

A Concrete Example

Let's apply this to a welfare benefits AI:

System: Benefits Assessor
Deployer: Municipality
Affected population: Citizens applying for welfare

High-relevance rights (from the generator):

  1. Article 34 (Social security) — HIGH

    • Risk: Eligible individuals may be wrongly denied
    • Risk: Fraud detection may create false positives among vulnerable groups
    • Mitigation: Accessible appeals process, human review of all denials
  2. Article 21 (Non-discrimination) — HIGH

    • Risk: Training data may contain historical biases
    • Risk: Outcomes may disproportionately affect protected groups
    • Mitigation: Bias testing, ongoing monitoring, accessible complaint mechanism
  3. Article 26 (Disability integration) — HIGH

    • Risk: Interface may not be accessible
    • Risk: Decision-making may not account for disability circumstances
    • Mitigation: AI Accessibility Impact Assessment, EN 301 549 compliance, consultation with disability orgs
  4. Article 47 (Effective remedy) — HIGH

    • Risk: Opaque AI decisions difficult to challenge
    • Mitigation: Clear explanation of decision basis, accessible complaint process

Disability-specific risks across multiple rights: 6 identified

Article 27(3) applicable: Yes → must notify market surveillance authority

Public Reporting

Article 27(5) requires the AI Office to publish a template. Pending its publication, the expectation is that deployers make their FRIA results public, or at least summary versions of them.

Good practice:

  • Publish on your organization's website
  • Include in annual reports
  • Share with affected communities proactively
  • Update when the FRIA is reviewed

Next Steps

  1. Determine if Article 27 applies to you (public sector or creditworthiness/insurance deployer)
  2. Read the DIHR/ECNL guide (free PDF, linked below)
  3. Generate a draft FRIA using our free tool
  4. Engage stakeholders — especially disability rights organizations
  5. Iterate and document — a FRIA is a living instrument

Resources


The FRIA generator is part of eucompliance, the open source toolkit bridging EU AI Act and European Accessibility Act compliance. EUPL-1.2 license. Star the repo if this helped.

Published on Regulia, the EU compliance blog for SMEs: articles about EAA accessibility, NIS2 cybersecurity, AI Act governance, and the open source tools that power it all.