AI systems are increasingly shaping public opinion, often in very subtle ways. A new study reveals that current legislation, such as the EU AI Act, is ill-equipped to handle this shift. The findings, authored by researchers from the Weizenbaum Institute, were recently published in the journal Communications of the ACM.
Large Language Models (LLMs) powering AI applications are increasingly serving as information "gatekeepers." Stefan Schmid, Principal Investigator at the Weizenbaum Institute, and Adrian Kuenzler of the University of Hong Kong (formerly a Fellow at the Weizenbaum Institute), investigated how these models transmit bias, the societal risks involved, and where regulatory frameworks must be strengthened.
AI as an Opinion Leader
Language models are the backbone of countless digital applications—ranging from chatbots and virtual assistants to complex decision-making systems in the workplace. The study demonstrates that these systems carry multiple biases. Their outputs reflect patterns in training data that privilege specific worldviews and values. Furthermore, AI systems are often configured to reinforce a user’s existing biases or to filter out certain types of content.
"The potential for Large Language Models to subtly influence political opinions and voter behavior poses a serious threat to public discourse and our democracy," explains Stefan Schmid. He notes that this influence is often subliminal, making it difficult for users to recognize they are being swayed.
Legislative Gaps: The AI Act and DSA Under Scrutiny
The study provides a critical analysis of current European legislation, specifically the Digital Services Act (DSA) and the AI Act. The authors conclude that these laws only address communication bias in AI as a byproduct of broader safety and content moderation measures. The focus remains on preventing "obvious" harm, while the subtle distortion of public discourse and democratic processes through bias in LLMs is largely neglected. Additionally, the market dominance of a handful of AI companies creates a further risk, potentially narrowing the diversity of perspectives in the digital sphere.
The Call for a Comprehensive Regulatory Approach
To effectively protect against discrimination and polarization, Schmid and Kuenzler propose broadening the regulatory scope. They argue that combining content moderation, competition, value chain regulation, and technical design governance is crucial in fostering diverse and transparent AI systems that mitigate bias while promoting a balanced digital information ecosystem.
Study: Communication Bias in Large Language Models: A Regulatory Perspective https://dl.acm.org/doi/10.1145/3769689
