idw – Informationsdienst Wissenschaft

02/08/2023 13:00

Free speech vs. harmful misinformation: How people resolve dilemmas in online content moderation

Nicole Siller, Press and Public Relations
Max-Planck-Institut für Bildungsforschung

    Online content moderation is a moral minefield, especially when freedom of expression clashes with preventing harm caused by misinformation. A study by a team of researchers from the Max Planck Institute for Human Development, University of Exeter, Vrije Universiteit Amsterdam, and University of Bristol examined how the public would deal with such moral dilemmas. They found that the majority of respondents would take action to control the spread of misinformation, in particular if it was harmful and shared repeatedly. The results of the study can be used to inform consistent and transparent rules for content moderation that the general public accepts as legitimate.

    The issue of content moderation on social media platforms came into sharp focus in 2021, when major platforms such as Facebook and Twitter suspended the accounts of then U.S. President Donald Trump. Debates continued as platforms confronted dangerous misinformation about COVID-19 and vaccines, and after Elon Musk single-handedly overturned Twitter’s COVID-19 misinformation policy and reinstated previously suspended accounts.

    “So far, social media platforms have been the ones making key decisions on moderating misinformation, which effectively puts them in the position of arbiters of free speech. Moreover, discussions about online content moderation often run hot but are largely uninformed by empirical evidence,” says Anastasia Kozyreva, lead author of the study and Research Scientist at the Max Planck Institute for Human Development. “To deal adequately with conflicts between free speech and harmful misinformation, we need to know how people handle various forms of moral dilemmas when making decisions about content moderation,” adds Ralph Hertwig, Director at the Center for Adaptive Rationality of the Max Planck Institute for Human Development.

    In the conjoint survey experiment, more than 2,500 U.S. respondents indicated whether they would remove social media posts spreading misinformation about democratic elections, vaccinations, the Holocaust, and climate change. They were also asked whether they would take punitive action against the accounts by issuing a warning or a temporary or indefinite suspension. Respondents were shown information about hypothetical accounts, including political leaning and number of followers, as well as the accounts’ posts and the consequences of the misinformation they contained.
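
    To make the conjoint design concrete, the following sketch shows how such vignettes could be assembled by randomly combining one level of each attribute into a profile shown to a respondent. The attribute names and levels here are illustrative assumptions drawn from the description above, not the study's actual materials.

    import random

    # Illustrative attribute levels for a conjoint vignette (assumptions
    # based on the press release, not the study's actual materials).
    ATTRIBUTES = {
        "topic": ["election denial", "anti-vaccination",
                  "Holocaust denial", "climate change denial"],
        "consequences": ["minor harm", "severe harm"],
        "repeat_offense": [False, True],
        "account_partisanship": ["Democrat", "Republican"],
        "followers": [100, 10_000, 1_000_000],
    }

    def draw_vignette(rng):
        # Randomly combine one level per attribute into a single profile.
        return {name: rng.choice(levels) for name, levels in ATTRIBUTES.items()}

    rng = random.Random(42)  # fixed seed so the example is reproducible
    for _ in range(3):
        print(draw_vignette(rng))

    Because every attribute is randomized independently of the others, differences in responses across the levels of one attribute can be attributed to that attribute alone.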

    The majority of respondents chose to take some action to prevent the spread of harmful misinformation. On average, 66 percent of respondents said they would delete the offending posts, and 78 percent would take some action against the account: 33 percent opted to issue a warning, and 45 percent chose to suspend the account, either temporarily or indefinitely. Not all misinformation was penalized equally: climate change denial was acted on least often (58 percent), whereas Holocaust denial (71 percent) and election denial (69 percent) were acted on most often, closely followed by anti-vaccination content (66 percent).

    “Our results show that so-called free-speech absolutists such as Elon Musk are out of touch with public opinion. People by and large recognize that there should be limits to free speech, namely, when it can cause harm, and that content removal or even deplatforming can be appropriate in extreme circumstances, such as Holocaust denial,” says co-author Stephan Lewandowsky, Chair in Cognitive Psychology at the University of Bristol.

    The study also sheds light on the factors that affect people’s decisions regarding content moderation online. The topic, the severity of the consequences of the misinformation, and whether it was a repeat offense had the strongest impact on decisions to remove posts and suspend accounts. Characteristics of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions.
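
    As a rough illustration of how such attribute effects can be read off conjoint data, the sketch below computes the share of removal decisions at each level of an attribute (a marginal mean) from a small table of simulated responses. The column names and values are hypothetical, not the study's data.

    import pandas as pd

    # Hypothetical responses: each row is one respondent's decision on one
    # randomly drawn vignette (invented data, not the study's).
    responses = pd.DataFrame({
        "topic": ["election denial", "climate change denial",
                  "Holocaust denial", "election denial"],
        "repeat_offense": [True, False, True, False],
        "removed": [1, 0, 1, 1],
    })

    # Because attribute levels are randomized independently, the average
    # removal rate per level estimates that attribute's effect on decisions.
    for attribute in ["topic", "repeat_offense"]:
        print(responses.groupby(attribute)["removed"].mean(), end="\n\n")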

    Respondents were not more inclined to remove posts from an account with an opposing political stance, nor were they more likely to suspend accounts that did not match their political preferences. However, Republicans and Democrats tended to take different approaches to resolving the dilemma between protecting free speech and removing potentially harmful misinformation. Democrats preferred to prevent dangerous misinformation across all four scenarios, whereas Republicans preferred to protect free speech, imposing fewer restrictions.

    “We hope our research can inform the design of transparent rules for content moderation of harmful misinformation. People's preferences are not the only benchmark for making important trade-offs on content moderation, but ignoring the fact that there is support for taking action against misinformation and the accounts that publish it risks undermining the public's trust in content moderation policies and regulations,” says co-author Professor Jason Reifler from the University of Exeter. “Effective and meaningful platform regulation requires not only clear and transparent rules for content moderation, but general acceptance of the rules as legitimate constraints on the fundamental right to free expression. This important research goes a long way to informing policy makers about what is and, more importantly, what is not acceptable user-generated content,” adds co-author Professor Mark Leiser from the Vrije Universiteit Amsterdam.


    Original publication:

    Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences of the United States of America, 120(7), Article e2210666120. https://doi.org/10.1073/pnas.2210666120


    More information:

    https://www.mpib-berlin.mpg.de/online-content-moderation


    Criteria of this press release:
    Journalists
    Media and communication sciences, Psychology, Social studies
    transregional, national
    Research results
    English
