Fears about the use of artificial intelligence (AI) in the workplace vary substantially across occupations and countries, researchers at the Max Planck Institute for Human Development have found in a representative study. They examined public attitudes in 20 countries toward AI in six occupations, including doctors, judges, and journalists. The findings, published in American Psychologist, can help AI designers and policymakers anticipate how new AI developments will be received in different nations and address fears in a principled yet culturally sensitive manner.
How would you react to receiving a diagnosis from an AI doctor? Would you trust a courtroom verdict delivered by an AI judge? Would you rely on news stories written entirely by a machine? Would you feel motivated working under an AI manager? These questions are at the heart of a recent study that examines widespread concerns about AI replacing human workers, while also revealing cultural differences in how people view AI's involvement in six key occupations: doctors, judges, managers, caregivers, religious leaders, and journalists.
Over 10,000 participants from 20 countries—including the United States, India, Saudi Arabia, Japan, and China—evaluated these six occupations using eight psychological traits: warmth, sincerity, tolerance, fairness, competence, determination, intelligence, and imagination. They also assessed AI’s potential to replicate these traits and expressed their levels of fear regarding AI taking over these roles. The findings suggest that when AI is introduced into a new job, people instinctively compare the human traits necessary for that job with AI's ability to imitate them. Notably, the level of fear felt by participants seems to be directly linked to the perceived mismatch between these human traits and AI's capabilities.
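To make the mismatch idea concrete, here is a minimal sketch in Python, not the authors' code and using made-up ratings, of how a gap between the traits an occupation is seen to require and the traits AI is believed to possess could be summarized in a single score:

```python
# Minimal illustrative sketch (not the study's analysis): compute a hypothetical
# "mismatch" score from ratings of required human traits vs. perceived AI capability.

TRAITS = ["warmth", "sincerity", "tolerance", "fairness",
          "competence", "determination", "intelligence", "imagination"]

# Hypothetical ratings on a 1-5 scale; the numbers below are invented for illustration.
required_for_doctor = {"warmth": 4.6, "sincerity": 4.5, "tolerance": 4.2,
                       "fairness": 4.4, "competence": 4.8, "determination": 4.3,
                       "intelligence": 4.7, "imagination": 3.5}
perceived_ai_capability = {"warmth": 2.1, "sincerity": 2.4, "tolerance": 3.0,
                           "fairness": 3.2, "competence": 4.1, "determination": 3.8,
                           "intelligence": 4.3, "imagination": 2.6}

def mismatch(required, capability, traits=TRAITS):
    """Average shortfall of perceived AI capability relative to the traits
    the occupation is seen to require (shortfall floored at zero per trait)."""
    gaps = [max(required[t] - capability[t], 0.0) for t in traits]
    return sum(gaps) / len(gaps)

print(f"Illustrative mismatch score for 'doctor': "
      f"{mismatch(required_for_doctor, perceived_ai_capability):.2f}")
```

On the study's account, the larger such a perceived gap, the stronger the reported fear of AI taking over that occupation.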
The researchers found substantial differences in fear levels between countries. India, Saudi Arabia, and the United States reported the highest average fear levels, particularly regarding AI in roles such as judges and doctors. By contrast, countries like Turkey, Japan, and China displayed the lowest fear levels, suggesting that cultural factors, such as historical experiences with technology, media narratives, and AI policies, significantly shape attitudes. AI-related fears in Germany were moderate, falling between the higher and lower levels observed. This middle ground suggests cautious optimism toward integrating AI into society.
The researchers also found occupation-specific differences in fear. Judges consistently ranked as the most feared AI occupation in nearly all countries, reflecting concerns about fairness, transparency, and moral judgment. Conversely, AI-driven journalists were the least feared, likely because people retain autonomy over how they engage with the information journalists provide, unlike judicial decisions, which leave little room for personal discretion. Other roles, such as AI-driven doctors and care workers, elicited strong fears in some countries due to concerns about AI's lack of empathy and emotional understanding.
This aligns with an earlier study in which researchers found initial indications that people react particularly negatively to AI managers, compared with AI co-workers or AI tools that assist with work. This negative reaction was particularly strong in management areas requiring human abilities, such as empathetic listening or respectful behavior (Dong, Bonnefon, & Rahwan, 2024).
“Adverse effects can follow whenever AI is deployed in a new occupation. An important task is to find a way to minimize adverse effects, maximize positive effects, and reach a state where the balance of effects is ethically acceptable,” says first author Mengchen Dong, research scientist at the Center for Humans and Machines at the Max Planck Institute for Human Development. The study identifies a critical link between fear and the mismatch between occupational expectations and AI’s perceived capabilities, offering a framework to guide culturally sensitive AI development.
By understanding what people value in human-centric roles, developers and policymakers can create and communicate about AI technologies in ways that build trust and acceptance. "A one-size-fits-all approach overlooks critical cultural and psychological factors, potentially adding barriers to the adoption of beneficial AI technologies across different societies and cultures," adds co-author Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development.
The study also highlights practical strategies for alleviating fears. For instance, concerns about AI doctors lacking sincerity might be addressed through increased transparency in decision-making and positioning AI as a support tool for human practitioners rather than a replacement. Similarly, fears about AI judges could be mitigated by focusing on fairness-enhancing algorithms and public education campaigns that demystify how AI systems operate.
Dong and her colleagues are continuing this work by exploring how utopian and dystopian visions of AI influence present-day attitudes in different countries. These ongoing efforts aim to deepen the understanding of human-AI interaction and guide the ethical and culturally informed deployment of AI systems worldwide.
In brief:
• The study, with over 10,000 participants in 20 countries, reveals significant cultural differences in public fears about AI replacing humans in six occupations: doctors, judges, managers, caregivers, religious leaders, and journalists.
• Fear arises when there is a discrepancy between the assumed capabilities of AI and the skills required for the role.
• Results show that countries like India, Saudi Arabia, and the U.S. have higher levels of fear, especially regarding AI in roles like doctors and judges. Countries like Japan, China, and Turkey report lower fear levels, indicating cultural factors influence attitudes.
• The research highlights the importance of designing AI systems that align with public expectations, offering strategies to reduce fears.
Dong, M., Conway, J. R., Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2024). Fears about artificial intelligence across 20 countries and six domains of application. American Psychologist. Advance online publication. https://doi.org/10.1037/amp0001454
Dong, M., Bonnefon, J.-F., & Rahwan, I. (2024). Toward human-centered AI management: Methodological challenges and future directions. Technovation, 131, Article 102953. https://doi.org/10.1016/j.technovation.2024.102953
Press release on the MPIB website with further graphics: https://www.mpib-berlin.mpg.de/press-releases/ai-fears
Image: AI at the workplace (AI-generated). © MPI for Human Development, 2025, generated with the help of Adobe Express