The use of artificial intelligence (AI) in public administration is increasing worldwide—including in the allocation of social services such as unemployment benefits, housing benefits, and social welfare. However, an international research team from the Max Planck Institute for Human Development and the Toulouse School of Economics has shown that it is precisely those who depend on such benefits who are most skeptical about automated decisions. To gain trust and acceptance for AI-supported systems, the perspectives of those affected must be considered.
A few years ago, the city of Amsterdam piloted an AI program called Smart Check, designed to identify potential cases of welfare fraud. Instead of reviewing applications randomly, the system sifted through numerous data points from municipal records, such as addresses, family composition, income, assets, and prior welfare claims, to assign a “risk score.” Applications deemed “high-risk” were flagged for investigation and forwarded to administrative staff for additional scrutiny. In practice, however, this process disproportionately flagged vulnerable groups, including immigrants, women, and parents, often without offering applicants a clear reason or an effective route to contest the suspicion. Mounting criticism from advocacy groups, legal scholars, and researchers led the city to suspend the program earlier this year, and a recent evaluation confirmed the system’s significant shortcomings.
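To make the mechanism concrete, the following is a minimal illustrative sketch of how a risk-scoring pipeline of this kind can turn applicant records into flags for extra scrutiny. It is not the actual Smart Check code: the real features, weights, and threshold are not public, so every field name and number below is a hypothetical placeholder.

```python
from dataclasses import dataclass

# Hypothetical applicant record; the real Smart Check features are not public.
@dataclass
class Application:
    household_size: int
    monthly_income: float
    assets: float
    prior_claims: int

def risk_score(app: Application) -> float:
    """Toy weighted score: each feature nudges the score up.
    All weights are invented for illustration only."""
    score = 0.0
    score += 0.3 * app.prior_claims
    score += 0.2 * max(0, app.household_size - 2)
    score += 0.4 if app.monthly_income < 1000 else 0.0
    score += 0.1 if app.assets < 500 else 0.0
    return score

FLAG_THRESHOLD = 0.8  # hypothetical cut-off

def flag_for_review(app: Application) -> bool:
    """Applications above the threshold are routed to caseworkers
    for additional scrutiny instead of normal processing."""
    return risk_score(app) >= FLAG_THRESHOLD
```

The sketch also hints at the failure mode described above: when the scored features correlate with attributes such as family composition or migration background, a fixed threshold can systematically over-flag specific groups, which is exactly the disparate impact critics identified in the Amsterdam pilot.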
This case highlights a central dilemma in the use of AI in welfare administration: while such systems promise greater efficiency and faster decisions, they also risk reinforcing biases, eroding trust, and disproportionately burdening vulnerable groups. Against this backdrop, researchers have begun to investigate how those directly affected perceive the increasing role of AI in the distribution of social benefits.
In a study published in Nature Communications, researchers at the Max Planck Institute for Human Development and the Toulouse School of Economics conducted three large-scale surveys with over 3,200 participants in the US and the UK to find out how people feel about the use of AI in the allocation of social benefits. The surveys focused on a realistic dilemma: Would people be willing to accept faster decisions made by a machine, even if this meant an increase in the rate of unjustified rejections? The key finding was that while many citizens are willing to accept minor losses in accuracy in favor of shorter waiting times, social benefit recipients have significantly greater reservations about AI-supported decisions.
“There is a dangerous assumption in policy-making that the average opinion represents the reality of all stakeholders,” explains lead author Mengchen Dong, a research scientist at the Center for Humans and Machines at the Max Planck Institute for Human Development who studies ethical questions surrounding the use of AI. In fact, the study reveals a clear divide: social welfare recipients reject AI-supported decisions significantly more often than non-recipients, even when the systems promise faster processing.
Another problem is that non-recipients systematically overestimate how willing benefit recipients would be to trust AI. This holds even when non-recipients are financially rewarded for accurately assessing the other group’s perspective. Vulnerable groups therefore understand the majority’s point of view better than their own perspective is understood.
Methodology: Simulated decision dilemmas and perspective shifts
The researchers presented participants with realistic decision-making scenarios: they could choose between processing by human administrators with a longer waiting time (e.g., eight weeks) or a faster decision by AI that carried a 5 to 30 percent higher risk of unjustified rejections.
Participants indicated which option they would prefer, either from their own perspective or, in a targeted perspective-taking condition, while putting themselves in the shoes of the other group (benefit recipients or non-recipients).
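As a rough illustration of this design, the sketch below enumerates the kind of choice vignettes participants might have faced. The press release only mentions an eight-week human waiting time and error-rate increases of 5 to 30 percent; the AI waiting times and the factor combinations shown here are placeholders, not the published survey materials.

```python
from itertools import product

# Hypothetical scenario grid; only the human wait and the 5-30 percent
# error increments come from the press release, the rest is assumed.
HUMAN_WAIT_WEEKS = 8
AI_WAIT_WEEKS_OPTIONS = [1, 2, 4]        # placeholder faster AI turnaround
ERROR_INCREASE_PCT = [5, 10, 20, 30]     # extra unjustified rejections with AI
PERSPECTIVES = ["own", "other_group"]    # self vs. perspective-taking condition

def build_scenarios():
    """Cross the design factors into individual survey vignettes."""
    scenarios = []
    for ai_wait, err, view in product(AI_WAIT_WEEKS_OPTIONS,
                                      ERROR_INCREASE_PCT,
                                      PERSPECTIVES):
        scenarios.append({
            "human_option": {"decider": "caseworker",
                             "wait_weeks": HUMAN_WAIT_WEEKS},
            "ai_option": {"decider": "AI", "wait_weeks": ai_wait,
                          "extra_rejection_risk_pct": err},
            "perspective": view,
        })
    return scenarios

print(len(build_scenarios()))  # 3 * 4 * 2 = 24 hypothetical vignettes
```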
While the US sample was representative of the population (around 20 percent of respondents were currently receiving social benefits), the British study specifically aimed for a 50/50 ratio between recipients of Universal Credit — a social benefit for low-income households — and non-recipients. This allowed differences between the groups to be systematically recorded. Demographic factors such as age, gender, education, income, and political orientation were also taken into account.
What are the benefits of a change of perspective? And does a right to object help?
The British sub-study also tested whether financial incentives could improve the ability to adopt a realistic perspective. Participants received bonus payments if their assessment of the other group came close to that group’s actual views. Despite the incentives, systematic misjudgments persisted, especially among those who did not receive benefits. A further attempt to strengthen trust in AI had only limited success: some participants were informed that AI decisions could hypothetically be appealed to human administrators. Although this information slightly increased trust, it did little to change the fundamental assessment of AI use.
Consequences for trust in government and administration
According to the study, the acceptance of AI in the social welfare system is closely linked to trust in government institutions. The more strongly people object to AI making welfare decisions, the less they trust the governments that use it. This applies to both recipients and non-recipients. In the UK, where the study examined the planned use of AI in the allocation of Universal Credit, many participants said that even if AI matched human case workers in speed and accuracy, they would still prefer the human case workers. The mention of a possible appeal process did little to change this.
Call for participatory development of AI systems
The researchers warn against developing AI systems for the allocation of social benefits solely according to the will of the majority or on the basis of aggregated data. “If the perspectives of vulnerable groups are not actively taken into account, there is a risk of wrong decisions with real consequences — such as unjustified benefit withdrawals or false accusations,” says co-author Jean-François Bonnefon, Director of the Social and Behavioral Sciences Department at Toulouse School of Economics.
The team of authors therefore calls for a reorientation of the development of public AI systems: away from purely technical efficiency metrics and toward participatory processes that explicitly include the perspectives of vulnerable groups. Otherwise, there is a risk of undesirable developments that will undermine trust in administration and technology in the long term. Building on this work in the US and UK, an ongoing collaboration will leverage Statistics Denmark’s infrastructure to engage vulnerable populations in Denmark and uncover their unique perspectives on broader public administration decisions.
In brief:
• Large-scale surveys: Surveys with more than 3,200 participants on attitudes toward AI-supported decision-making processes in the allocation of social benefits in the US and the UK.
• Differences between social welfare recipients and non-recipients: Social welfare recipients are more skeptical of AI-supported decisions than non-recipients; non-recipients systematically overestimate the trust that those affected have in AI, even when they are rewarded for assessing that perspective realistically.
• Trust-building measures: Measures such as a hypothetical right of appeal only slightly increase trust in AI, but do not change the fundamental rejection among those affected.
• Design of AI systems: The study calls for participatory development processes for AI systems that actively incorporate the perspectives of vulnerable groups—otherwise there is a risk of losing trust in government and administration.
Dong, M., Bonnefon, J.-F., & Rahwan, I. (2025). Heterogeneous preferences and asymmetric insights for AI use among welfare claimants and non-claimants. Nature Communications, 16, Article 6973. https://doi.org/10.1038/s41467-025-62440-3
Press release on the MPIB website: https://www.mpib-berlin.mpg.de/press-releases/ai-welfare
Image: Algorithms in public administration: Using AI systems to approve social benefits promises greater sp ... (Copyright: MPI for Human Development)