Is it a human being writing here, or is artificial intelligence behind it? Anyone who browses Twitter, Instagram or TikTok has, consciously or not, already encountered social bots: automated profiles that comment, like or send messages. Social psychologists at the University of Duisburg-Essen (UDE) have conducted several studies to investigate how we deal with them. Do we recognize artificial intelligence? Do we accept it? That depends on the topic, and on the bot's opinion.
In general, bots are quite useful: they remind us to drink enough water or deliver weather updates. Social bots, however, are also used to promote certain opinions on the web or even to spread fake news deliberately.
In a series of studies, UDE's social psychologists investigated whether we recognize manipulative social bots, at what point we become suspicious, and under what circumstances we even check whether a profile belongs to a real person. In three online surveys, a total of about 600 participants from the U.S. looked at various Twitter profiles. These profiles strongly favored either the Republicans or the Democrats and, in addition, appeared individually designed or automated to varying degrees.
"Due to the political system in the U.S., we could assume that participants would clearly favor either one party or the other", explains PhD student and one of the study leaders Magdalena Wischnewski. "There are few other aspects of life with such a clear dichotomy."
The results show that participants became suspicious when a profile appeared highly automated. However, they tolerated this more readily when the account expressed the same political opinion as their own. The opposite was true for contrary positions: those who support the Democratic U.S. president tended to be suspicious of Republican content, and vice versa. Participants mostly used bot-detection software to confirm an opinion they had already formed.
"The results apply both geographically to other countries and in terms of content to all aspects of life, where there are different positions", Wischnewski says. "For us to interact with social bots, they have to be human-like and they must share our opinions. But there are very few of those. It is therefore unlikely that many users will fall for bots in direct contact. In other words, social bots rather serve to pick up topics so often that it seems as if they are trending."
Editor: Birte Vierjahn, +49 203/37 9-2427, birte.vierjahn@uni-due.de
Magdalena Wischnewski, Social Psychology, +49 203/37 9-2442, magdalena.wischnewski@uni-due.de
"Disagree? You Must be a Bot! How Beliefs Shape Twitter Profile Perceptions"
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems,
https://doi.org/10.1145/3411764.3445109
Study setup: https://osf.io/36mkw/ (Open Access)
Characteristics of this press release:
Journalists, students, scientists, everyone
Information technology, media and communication sciences, philosophy / ethics, psychology
transregional
Research results, scientific publications
English