idw – Informationsdienst Wissenschaft

28.05.2025 08:37

AI Meets Game Theory: How Language Models Perform in Human-Like Social Scenarios

Verena Coscia, Communications
Helmholtz Zentrum München Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH)

    Large language models (LLMs) – the advanced AI behind tools like ChatGPT – are increasingly integrated into daily life, assisting with tasks such as writing emails, answering questions, and even supporting healthcare decisions. But can these models collaborate with others the way humans do? Can they understand social situations, make compromises, or establish trust? A new study by researchers at Helmholtz Munich, the Max Planck Institute for Biological Cybernetics, and the University of Tübingen reveals that while today’s AI is smart, it still has much to learn about social intelligence.

    Playing Games to Understand AI Behavior

    To find out how LLMs behave in social situations, researchers applied behavioral game theory – a method typically used to study how people cooperate, compete, and make decisions. The team had various AI models, including GPT-4, engage in a series of games designed to simulate social interactions and assess key factors such as fairness, trust, and cooperation.
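The kind of repeated-game harness used in behavioral game theory can be sketched in a few lines. The payoff matrix below is the classic Prisoner's Dilemma and the two toy strategies are illustrative stand-ins for model policies – the study's actual games, prompts, and scoring are described in the paper, not here.

```python
# Illustrative sketch of a repeated-game harness. The payoffs and the
# two toy strategies are textbook examples (Prisoner's Dilemma), used
# here only to show the shape of such an evaluation.

# (my_points, their_points) for each (my_move, their_move);
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

def play_repeated_game(strategy_a, strategy_b, rounds=10):
    """Run two strategies against each other and total their payoffs."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two toy strategies standing in for model policies:
def always_defect(own_history, other_history):
    return "D"

def tit_for_tat(own_history, other_history):
    # Cooperate first, then mirror the opponent's last move.
    return other_history[-1] if other_history else "C"

print(play_repeated_game(tit_for_tat, always_defect))  # → (9, 14)
```

Comparing a model's total score and move sequence against reference strategies like these is how such benchmarks separate self-interested play from cooperative play.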

    The researchers discovered that GPT-4 excelled in games demanding logical reasoning – particularly when prioritizing its own interests. However, it struggled with tasks that required teamwork and coordination, often falling short in those areas.

    “In some cases, the AI seemed almost too rational for its own good,” said Dr. Eric Schulz, lead author of the study. “It could spot a threat or a selfish move instantly and respond with retaliation, but it struggled to see the bigger picture of trust, cooperation, and compromise.”

    Teaching AI to Think Socially

    To encourage more socially aware behavior, the researchers implemented a straightforward approach: they prompted the AI to consider the other player’s perspective before making its own decision. This technique, called Social Chain-of-Thought (SCoT), resulted in significant improvements. With SCoT, the AI became more cooperative, more adaptable, and more effective at achieving mutually beneficial outcomes – even when interacting with real human players.

    “Once we nudged the model to reason socially, it started acting in ways that felt much more human,” said Elif Akata, first author of the study. “And interestingly, human participants often couldn’t tell they were playing with an AI.”

    Applications in Health and Patient Care

    The implications of this study reach well beyond game theory. The findings lay the groundwork for developing more human-centered AI systems, particularly in healthcare settings where social cognition is essential. In areas like mental health, chronic disease management, and elderly care, effective support depends not only on accuracy and information delivery but also on the AI’s ability to build trust, interpret social cues, and foster cooperation. By modeling and refining these social dynamics, the study paves the way for more socially intelligent AI, with significant implications for health research and human-AI interaction.

    “An AI that can encourage a patient to stay on their medication, support someone through anxiety, or guide a conversation about difficult choices,” said Elif Akata. “That’s where this kind of research is headed.”


    Scientific contact:

    Dr. Eric Schulz, https://hcai-munich.com/eric.html


    Original publication:

    Akata et al., 2025: Playing repeated games with Large Language Models. Nature Human Behaviour. DOI: https://doi.org/10.1038/s41562-025-02172-y


    Characteristics of this press release:
    Journalists, scientists
    Medicine, psychology
    Transregional
    Scientific publications
    English


     
