idw – Informationsdienst Wissenschaft

30.03.2022 13:29

How to "detox" potentially offensive language from an AI - Publication in "Nature Machine Intelligence"

Silke Paradowski, Communications and Media Office
Technische Universität Darmstadt

    Researchers from the Artificial Intelligence and Machine Learning Lab at the Technical University of Darmstadt demonstrate that artificial intelligence language systems also learn human concepts of "good" and "bad". The results have now been published in the journal "Nature Machine Intelligence".

    Although moral concepts differ from person to person, there are fundamental commonalities. For example, it is considered good to help the elderly. It is not good to steal money from them. We expect a similar kind of "thinking" from an artificial intelligence that is part of our everyday life. For example, a search engine should not add the suggestion "steal from" to our search query "elderly people". However, examples have shown that AI systems can indeed be offensive and discriminatory. Microsoft's chatbot Tay, for example, attracted attention with lewd comments, and text-generation systems have repeatedly shown discrimination against under-represented groups.
    This is because search engines, automatic translation, chatbots and other AI applications are based on natural language processing (NLP) models. These have made considerable progress in recent years thanks to neural networks. One example is Bidirectional Encoder Representations from Transformers (BERT), a pioneering model from Google. It considers words in relation to all the other words in a sentence, rather than processing them individually one after the other. BERT models can take the entire context of a word into account, which is particularly useful for understanding the intent behind search queries. However, developers need to train their models by feeding them data, which is often done using gigantic, publicly available text collections from the internet. If these texts contain sufficiently discriminatory statements, the trained language models may reflect this.
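    To illustrate what taking the entire context of a word into account means in practice, here is a minimal sketch, not the authors' code, that queries a publicly available BERT model through the Hugging Face transformers library; the model name and example sentences are illustrative assumptions. The same word, "killing", receives a different vector depending on its sentence.

    # Minimal illustrative sketch: BERT assigns context-dependent vectors to words,
    # so "killing" in "killing time" differs from "killing" in "killing a man".
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed_word(sentence: str, word: str) -> torch.Tensor:
        """Return the contextual embedding of `word` inside `sentence`."""
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]        # (num_tokens, 768)
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
        return hidden[tokens.index(word)]                        # first occurrence

    v1 = embed_word("I am killing time before the meeting.", "killing")
    v2 = embed_word("He was arrested for killing a man.", "killing")
    similarity = torch.nn.functional.cosine_similarity(v1, v2, dim=0).item()
    print(f"cosine similarity of the two 'killing' vectors: {similarity:.2f}")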
    Researchers from the fields of AI and cognitive science led by Patrick Schramowski from the Artificial Intelligence and Machine Learning Lab at TU Darmstadt have discovered that concepts of "good" and "bad" are also deeply embedded in these language models. In their search for latent, inner properties of these language models, they found a dimension that seemed to correspond to a gradation from good actions to bad actions. In order to substantiate this scientifically, the researchers at TU Darmstadt first conducted two studies with people - one on site in Darmstadt and an online study with participants worldwide. The researchers wanted to find out which actions participants rated as good or bad behaviour in the deontological sense, more specifically whether they rated a verb more positively (Do's) or negatively (Don'ts). An important question was what role contextual information played. After all, killing time is not the same as killing someone.
    The researchers then tested language models such as BERT to see whether they arrived at similar assessments. "We formulated actions as questions to investigate how strongly the language model argues for or against this action based on the learned linguistic structure", says Schramowski. Example questions were "Should I lie?" or "Should I smile at a murderer?"
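    The following is a hedged sketch of this idea, not the published implementation: short action phrases are embedded with a BERT-based sentence encoder, the first principal component of these embeddings is taken as a candidate "moral dimension", and questions such as "Should I lie?" are then scored by projecting them onto this axis. The phrase lists, the encoder name and the use of plain PCA are illustrative assumptions.

    # Hedged sketch of a learned "moral dimension"; all lists and names are illustrative.
    from sklearn.decomposition import PCA
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("bert-large-nli-mean-tokens")  # assumed sentence encoder

    # Tiny example lists of actions commonly rated as Do's and Don'ts.
    dos = ["help elderly people", "smile", "greet my friends", "tell the truth"]
    donts = ["steal money from elderly people", "lie to my friends",
             "kill people", "harm animals"]

    embeddings = encoder.encode(dos + donts)                     # shape: (n_phrases, dim)

    # First principal component as the candidate moral axis; its sign is
    # arbitrary, so orient it towards the Do's.
    pca = PCA(n_components=1).fit(embeddings)
    sign = 1.0 if pca.transform(encoder.encode(dos)).mean() > 0 else -1.0

    def moral_score(action: str) -> float:
        """Positive values lean towards 'Do', negative towards 'Don't'."""
        return sign * float(pca.transform(encoder.encode([action]))[0, 0])

    # Questions like those in the study; context matters, as in "killing time".
    for question in ["Should I lie?", "Should I smile at a murderer?", "Should I kill time?"]:
        print(f"{moral_score(question):+.2f}  {question}")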
    "We found that the moral views inherent in the language model largely coincide with those of the study participants", says Schramowski. This means that a language model contains a moral world view when it is trained with large amounts of text.
    The researchers then developed an approach to put the moral dimension contained in the language model to use: it can be employed not only to rate a sentence as describing a positive or negative action. Thanks to the discovered latent dimension, verbs in a text can also be substituted in such a way that a given sentence becomes less offensive or discriminatory, and this can even be done gradually.
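    Building on the moral_score sketch above, a gradual detoxification could, under the same assumptions, look roughly as follows: candidate verbs are substituted into a sentence and only those substitutions are kept whose score clears a chosen threshold; raising the threshold step by step filters the text more strictly. The candidate list, the sentence template and the thresholds are purely illustrative.

    # Hedged sketch of gradual detoxification by verb substitution
    # (reuses moral_score from the sketch above; all values are illustrative).
    CANDIDATE_VERBS = ["steal from", "ignore", "visit", "help"]

    def detox(template: str, threshold: float = 0.0) -> list[str]:
        """Keep only the substitutions whose moral score exceeds `threshold`."""
        accepted = []
        for verb in CANDIDATE_VERBS:
            sentence = template.format(verb=verb)
            if moral_score(sentence) > threshold:
                accepted.append(sentence)
        return accepted

    # A higher threshold filters more strictly, which is the "gradual" part.
    print(detox("I will {verb} elderly people.", threshold=0.0))
    print(detox("I will {verb} elderly people.", threshold=0.3))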
    Although this is not the first attempt to detoxify the potentially offensive language of an AI, here the assessment of what is good and bad comes from the model itself, which was trained on human-written text. What makes the Darmstadt approach special is that it can be applied to any language model. "We don't need access to the parameters of the model", says Schramowski. In the future, this should make communication between humans and machines considerably smoother.

    About TU Darmstadt
    TU Darmstadt is one of the leading technical universities in Germany and stands for excellent and relevant science. Global transformations – from the energy transition to Industry 4.0 and artificial intelligence – are being decisively shaped by TU Darmstadt through outstanding findings and forward-looking study programmes.
    TU Darmstadt concentrates its top-level research in three fields: energy and environment, information and intelligence, matter and materials. Its problem-centred interdisciplinarity and productive exchange with society, business and politics generate progress for sustainable development worldwide.
    Since its foundation in 1877, TU Darmstadt has been one of the most internationally oriented universities in Germany. As a European technical university, it is building a trans-European campus in the Unite! alliance. Together with its partners at the Rhine-Main universities – Goethe University Frankfurt and Johannes Gutenberg University Mainz – it continues to develop the Frankfurt-Rhine-Main metropolitan region as a globally attractive science area. www.tu-darmstadt.de


    Scientific contact:

    Patrick Schramowski
    Artificial Intelligence and Machine Learning Group
    Department of Computer Science
    schramowski@cs.tu-darmstadt.de
    Tel.: +49 6151 1624413


    Original publication:

    Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin Rothkopf, Kristian Kersting (2022): "Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do", Nature Machine Intelligence 4, 258–268.
    https://doi.org/10.1038/s42256-022-00458-8



    Attributes of this press release:
    Journalists, Scientists
    Information technology, Psychology, Language / literature
    transregional
    Research results, Scientific publications
    English


     
