23.10.2025 15:45

Strength of gender biases in AI images varies across languages

Julia Rinner Corporate Communications Center
Technische Universität München

    Researchers at the Technical University of Munich (TUM) and TU Darmstadt have studied how text-to-image generators deal with gender stereotypes in various languages. The results show that the models not only reflect gender biases, but also amplify them. The direction and strength of the bias depend on the language in question.

    In social media, web searches and on posters: AI-generated images can now be found everywhere. Large language models (LLMs) such as ChatGPT are capable of converting simple input into deceptively realistic images. Researchers have now demonstrated that the generation of such artificial images not only reproduces gender biases, but actually magnifies them.

    Models in different languages investigated

    The study explored models across nine languages and compared the results. Previous studies had generally focused only on English-language models. As a benchmark, the team developed the Multilingual Assessment of Gender Bias in Image Generation (MAGBIG). It is based on carefully controlled occupational designations. The study investigated four different types of prompts: direct prompts that use the ‘generic masculine’ in languages in which the generic term for an occupation is grammatically masculine (‘doctor’), indirect descriptions (‘a person working as a doctor’), explicitly feminine prompts (‘female doctor’) and ‘gender star’ prompts (the German convention intended to create a gender-neutral designation by using an asterisk, e.g. ‘Ärzt*innen’ for doctors).
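    The prompt construction can be pictured with a small sketch. The following Python snippet is purely illustrative and is not the actual MAGBIG benchmark code: the templates, the two example occupations and all names in it are assumptions chosen to mirror the four prompt types described above for German.

```python
# Illustrative sketch only – not the MAGBIG benchmark code.
# Builds the four prompt variants described above (direct generic masculine,
# indirect, explicitly feminine, gender-star) for two German occupations.
# Templates, occupations and field names are assumptions for this example.

TEMPLATES_DE = {
    "direct":      "Ein Foto von einem {masc}",                         # generic masculine
    "indirect":    "Ein Foto von einer Person, die als {masc} arbeitet",
    "feminine":    "Ein Foto von einer {fem}",                          # explicitly feminine
    "gender_star": "Ein Foto von {star}",                               # gender-star form
}

# (generic masculine, explicitly feminine, gender-star form)
OCCUPATIONS_DE = [
    ("Arzt", "Ärztin", "Ärzt*innen"),
    ("Buchhalter", "Buchhalterin", "Buchhalter*innen"),
]

def build_prompts(templates, occupations):
    """Expand every occupation into all prompt variants."""
    prompts = []
    for masc, fem, star in occupations:
        for variant, template in templates.items():
            prompts.append({"variant": variant,
                            "text": template.format(masc=masc, fem=fem, star=star)})
    return prompts

if __name__ == "__main__":
    for p in build_prompts(TEMPLATES_DE, OCCUPATIONS_DE):
        print(f"{p['variant']:<12} {p['text']}")
```

    Running such a sketch simply prints one prompt string per variant; in the study, prompts of this kind in nine languages are fed to the image generators and the resulting images are then analyzed.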

    To make the results comparable, the researchers included languages in which the names of occupations are gendered, such as German, Spanish and French. In addition, the benchmark included languages such as English and Japanese that do not gender occupational terms but have gendered pronouns (‘her’, ‘his’). And finally, it included languages without grammatical gender: Korean and Chinese.

    AI images perpetuate and magnify role stereotypes

    The results of the study show that direct prompts with the generic masculine produce the strongest biases. For example, occupations such as ‘accountant’ yield mostly images of white males, while prompts referring to caregiving professions tend to generate female-presenting images. Gender-neutral or ‘gender-star’ forms only slightly mitigated these stereotypes, while images resulting from explicitly feminine prompts showed almost exclusively women. Along with the gender distribution, the researchers also analyzed how well the models understood and executed the various prompts. While neutral formulations were seen to reduce gender stereotypes, they also led to a poorer match between the text input and the generated image.
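    The kind of per-prompt analysis described here can be sketched in a few lines. This is a minimal, assumed example rather than the study's actual evaluation pipeline: it presumes the generated images have already been labelled by some external classifier, and the simple female-share metric is only one possible way to quantify the skew.

```python
# Minimal sketch of a bias measurement, assuming each generated image has
# already been labelled 'female' or 'male' presenting by an external classifier.
# The metric (share of female-presenting images per prompt variant) is an
# illustrative assumption, not the study's exact measure.
from collections import defaultdict

def female_share_by_variant(labelled_images):
    """labelled_images: iterable of (prompt_variant, presented_gender) pairs.

    Returns the share of female-presenting images per prompt variant;
    values far from 0.5 indicate a skew toward one presented gender."""
    counts = defaultdict(lambda: [0, 0])          # variant -> [female, total]
    for variant, label in labelled_images:
        counts[variant][0] += int(label == "female")
        counts[variant][1] += 1
    return {variant: fem / total for variant, (fem, total) in counts.items()}

if __name__ == "__main__":
    # Tiny made-up sample: a strongly skewed 'direct' prompt vs. an all-female 'feminine' one.
    sample = [("direct", "male")] * 9 + [("direct", "female")] + [("feminine", "female")] * 10
    print(female_share_by_variant(sample))        # e.g. {'direct': 0.1, 'feminine': 1.0}
```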

    “Our results clearly show that the language structures have a considerable influence on the balance and bias of AI image generators,” says Alexander Fraser, Professor of Data Analytics & Statistics at the TUM Campus Heilbronn. “Anyone using AI systems should be aware that different wordings may result in entirely different images and may therefore magnify or mitigate societal role stereotypes.”

    "AI image generators are not neutral—they illustrate our prejudices in high resolution, and this depends crucially on language. Especially in Europe, where many languages converge, this is a wake-up call: fair AI must be designed with language sensitivity in mind,“ adds Prof. Kristian Kersting, co-director of hessian.AI and co-spokesperson for the ”Reasonable AI" cluster of excellence at TU Darmstadt.

    Remarkably, bias varies across languages without a clear link to grammatical structures. For example, switching from French to Spanish prompts leads to a substantial increase in gender bias, despite both languages distinguishing in the same way between male and female occupational terms.


    Scientific contact:

    Prof. Alexander Fraser
    Technical University of Munich
    Professorship for Data Analytics & Statistics (DSS)
    alexander.fraser@tum.de


    Original publication:

    Felix Friedrich, Katharina Hämmerl, Patrick Schramowski, Manuel Brack, Jindřich Libovický, Kristian Kersting, and Alexander Fraser. Multilingual Text-to-Image Generation Magnifies Gender Stereotypes. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (2025). DOI: 10.18653/v1/2025.acl-long.966


    Further information:

    https://www.tum.de/en/news-and-events/all-news/press-releases/details/strength-o...



    Attributes of this press release:
    Journalists
    Society, Information technology
    transregional
    Research results, Scientific publications
    English


     
