FRANCE: DISCRIMINATORY ALGORITHM USED BY THE SOCIAL SECURITY AGENCY MUST BE STOPPED

Friday, October 18, 2024


The French authorities must immediately stop the use of a discriminatory risk-scoring algorithm deployed by the French Social Security Agency’s National Family Allowance Fund (CNAF) to detect overpayments and errors in benefit payments, Amnesty International said today.

On 15 October, Amnesty International and fourteen other coalition partners, led by La Quadrature du Net (LQDN), submitted a complaint to the Council of State, the highest administrative court in France, demanding that the risk-scoring algorithmic system used by CNAF be stopped.

“From the outset, the risk-scoring system used by CNAF treats individuals who experience marginalization – those with disabilities, single parents who are mostly women, and those living in poverty – with suspicion. This system operates in direct opposition to human rights standards, violating the right to equality and non-discrimination and the right to privacy,” said Agnès Callamard, Secretary General at Amnesty International.

In 2023, La Quadrature du Net (LQDN) obtained versions of the algorithm’s source code – the set of instructions written by programmers that make up a piece of software – thereby exposing the discriminatory nature of the system.

Since 2010, CNAF has used a risk-scoring algorithm to identify people who are potentially committing benefits fraud by receiving overpayments. The algorithm assigns a risk score between zero and one to all recipients of family and housing benefits. The closer the score is to one, the higher the probability of being flagged for investigation.

Overall, 32 million people in France live in households that receive a benefit from CNAF. Their sensitive personal data, as well as that of their families, is processed periodically, and a risk score is assigned.

The criteria that increase one’s risk score include parameters which discriminate against vulnerable households: being on a low income, being unemployed, living in a disadvantaged neighbourhood, spending a significant portion of income on rent, and working while having a disability. The details of those flagged with a high risk score are compiled into a list for further investigation by a fraud investigator.
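To make the mechanism described above concrete, the following is a minimal sketch of how such a weighted risk-scoring system could work. The attribute names, weights and flagging threshold are invented for illustration; LQDN obtained versions of the real source code, but neither it nor CNAF’s actual criteria are reproduced here.

```python
import math

# Hypothetical illustration only: the weights and threshold below are
# invented, not CNAF's. The sketch shows the general shape of the system
# described above: weighted socioeconomic attributes are combined and
# squashed into a risk score between zero and one.
WEIGHTS = {
    "low_income": 1.2,
    "unemployed": 0.9,
    "disadvantaged_neighbourhood": 0.7,
    "high_rent_burden": 0.8,
    "working_with_disability": 1.0,
}
BIAS = -2.0           # baseline log-odds before any attribute applies
FLAG_THRESHOLD = 0.7  # scores at or above this trigger an investigation


def risk_score(attributes: dict[str, bool]) -> float:
    """Map a recipient's recorded attributes to a score between 0 and 1."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if attributes.get(name))
    return 1 / (1 + math.exp(-z))  # logistic squashing into (0, 1)


def flag_for_investigation(recipients: dict[str, dict[str, bool]]) -> list[str]:
    """Compile the list of recipient IDs whose score crosses the threshold."""
    return [rid for rid, attrs in recipients.items()
            if risk_score(attrs) >= FLAG_THRESHOLD]


if __name__ == "__main__":
    households = {
        "A": {"low_income": True, "unemployed": True,
              "high_rent_burden": True, "working_with_disability": True},
        "B": {},  # no marginalization markers recorded
    }
    for rid, attrs in households.items():
        print(rid, round(risk_score(attrs), 2))        # A ≈ 0.87, B ≈ 0.12
    print("flagged:", flag_for_investigation(households))  # ['A']
```

Note the structural one-sidedness of such a model: because every weighted attribute is a marker of marginalization, each recorded attribute can only push a household’s score upward, toward investigation.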

“While authorities herald the rollout of algorithmic technologies in social protection systems as a way to increase efficiency and detect fraud and errors, in practice, these systems flatten the realities of people’s lives. They work as extensive data-mining tools that stigmatize marginalized groups, and invade their privacy,” said Agnès Callamard.

Amnesty International did not investigate specific cases of people flagged by the CNAF system. However, our investigations in the Netherlands and Serbia suggest that using AI-powered systems and automation in the public sector enables mass surveillance: the amount of data collected is disproportionate to the purported aim of the system. Moreover, evidence gathered by Amnesty International also shows that many of these systems are ineffective at doing what they purport to do, whether identifying fraud or errors in the benefits system.

It has also been argued that the scale of errors or fraud in benefits systems has been exaggerated to justify the development of such systems, often leading to discriminatory, racist or sexist targeting of certain groups, particularly migrants and refugees.

Over the past year, France has been actively promoting itself internationally as the next hub for artificial intelligence (AI) technologies, culminating in a summit scheduled for February 2025. At the same time, France has also been legalizing mass surveillance technologies and has consistently undermined the EU’s AI Act negotiations.

“France is relying on a risk-scoring algorithmic system for social benefits that highlights, sustains and enshrines the bureaucracy’s prejudices and discrimination. Instead, France should ensure that it complies with its human rights obligations, first and foremost that of non-discrimination. The authorities must address current and existing AI-related harms amid the country’s quest to become a global AI hub,” said Agnès Callamard.

Under the newly adopted European Artificial Intelligence Regulation (AI Act), AI systems used by authorities to determine access to essential public services and benefits are considered to pose a high risk to people’s rights, health and safety. They must therefore meet strict technical, transparency and governance rules, including an obligation on deployers to carry out an assessment of human rights risks and guarantee mitigation measures before deployment.

Meanwhile, certain systems, such as those used for social scoring, are considered to pose an unacceptable level of risk and must therefore be banned.

It is currently unclear whether the system used by CNAF qualifies as a social scoring system due to a lack of clarity in the AI Act on what constitutes such a system.

“It is unfortunate that EU lawmakers have been vague in explicitly defining social scoring within the AI Act. The European Commission must ensure that its upcoming guidelines provide a clear and enforceable interpretation of the social scoring ban, especially as it applies to discriminatory fraud detection and risk-scoring systems,” said Agnès Callamard.

Regardless of its classification under the AI Act, all evidence suggests that the system used by CNAF is discriminatory. It is essential that the authorities stop using it and scrutinize biased practices that are inherently harmful, especially to marginalized communities seeking social benefits.

Background

The European Commission will issue guidance on how to interpret the prohibitions in the AI Act prior to their entry into force on 2 February 2025, including what would qualify as social scoring systems.

In August 2024, the AI Act came into force. Amnesty International, as part of a civil society coalition led by the European Digital Rights Network (EDRi), has been calling for EU artificial intelligence regulation that protects and promotes human rights.

In March 2024, an Amnesty International briefing outlined how digital technologies, including artificial intelligence, automation, and algorithmic decision-making, are exacerbating inequalities in social protection systems across the world.

In 2021, Amnesty International’s report Xenophobic Machines exposed how racial profiling was baked into the design of the algorithmic system used by the Dutch tax authorities to flag claims for childcare benefits as potentially fraudulent.

