AI systems are increasingly shaping public opinion, often in very subtle ways. A new study reveals that current legislation, such as the EU AI Act, is ill-equipped to handle this shift. The findings, authored by researchers from the Weizenbaum Institute, were recently published in the journal Communications of the ACM.
Large Language Models (LLMs) powering AI applications are increasingly serving as information "gatekeepers." Stefan Schmid, Principal Investigator at the Weizenbaum Institute, and Adrian Kuenzler of the University of Hong Kong (formerly a Fellow at the Weizenbaum Institute), investigated how these models transmit bias, the societal risks involved, and where regulatory frameworks must be strengthened.
AI as an Opinion Leader
Language models are the backbone of countless digital applications, ranging from chatbots and virtual assistants to complex decision-making systems in the workplace. The study demonstrates that these systems carry multiple biases: their outputs reflect patterns in the training data that privilege specific worldviews and values. Furthermore, AI systems are often configured to reinforce a user's existing biases or to filter out certain types of content.
"The potential for Large Language Models to subtly influence political opinions and voter behavior poses a serious threat to public discourse and our democracy," explains Stefan Schmid. He notes that this influence is often subliminal, making it difficult for users to recognize they are being swayed.
Legislative Gaps: The AI Act and DSA Under Scrutiny
The study provides a critical analysis of current European legislation, specifically the Digital Services Act (DSA) and the AI Act. The authors conclude that these laws address communication bias in AI only as a byproduct of broader safety and content moderation measures. The focus remains on preventing "obvious" harm, while the subtle distortion of public discourse and democratic processes through bias in LLMs is largely neglected. In addition, the market dominance of a handful of AI companies poses a further risk, potentially narrowing the diversity of perspectives in the digital sphere.
The Call for a Comprehensive Regulatory Approach
To effectively protect against discrimination and polarization, Schmid and Kuenzler propose broadening the regulatory scope. They argue that combining content moderation, competition law, value-chain regulation, and the governance of technical design is crucial to fostering diverse and transparent AI systems that mitigate bias while promoting a balanced digital information ecosystem.
Study: Communication Bias in Large Language Models: A Regulatory Perspective https://dl.acm.org/doi/10.1145/3769689
Characteristics of this press release:
Journalists
Information technology, Media and communication sciences, Politics, Law
transregional
Research results
English
