idw – Informationsdienst Wissenschaft

01.12.2017 11:26

Artificial Intelligence to the rescue in finishing off Fake News

Linda Behringer Public Relations
Max-Planck-Institut für Intelligente Systeme

    An algorithm jointly developed by researchers at the Max Planck Institute for Intelligent Systems and the Max Planck Institute for Software Systems in Kaiserslautern promises to optimize which doubtful news stories to send for fact checking, and when, helping to prevent fake news from spreading on social media.

    Stuttgart – “We optimize when and which story to send for fact checking, so that fewer people are exposed to fake news without knowing that it may be fake,” explains Manuel Gomez Rodriguez. He is a researcher at the Max Planck Institute for Software Systems in Kaiserslautern and an alumnus of the Max Planck Institute for Intelligent Systems, where he did his PhD in the Empirical Inference Department. Gomez is co-author of the paper “Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation”, which was accepted for the 11th ACM International Conference on Web Search and Data Mining (WSDM) 2018 in Los Angeles.

    When judging which news story is a hoax, Gomez and his colleagues Jooyeon Kim, Behzad Tabibian, Alice Oh and Bernhard Schölkopf, Director of the Empirical Inference Department at the Max Planck Institute for Intelligent Systems in Tübingen, rely on the crowd, just as Facebook, Twitter or Weibo do. According to Facebook’s website, users who see a story in their feed can flag it as misinformation. If the story receives enough flags (Facebook does not disclose the exact number), it is forwarded to a coalition of independent, third-party fact-checking organizations that are signatories of Poynter’s International Fact-Checking Code of Principles. If these organizations identify a story as fake, Facebook marks it as disputed and links to an article explaining why. Disputed stories may also appear lower in the news feed.

    Urban legends fuel the emergence of a post-truth society

    For Manuel Gomez and his colleagues, this approach does not go far enough. The number of flags alone should not determine when the fact-checking teams begin their work. If a story causes no ripples, why waste valuable fact-checking resources, the researchers argue. Instead, the likelihood of a story going viral should also determine if and when fact checkers step in. After all, once a fake story spreads widely, the damage is far greater than if it had never spread on the internet at all. The higher the likelihood of a flagged story spreading quickly, even if it has received only a few flags so far, the earlier fact checking should kick in. The trigger is therefore not the absolute number of flags alone, but also the likelihood of a story going viral.

    “If our algorithm estimates that something will go viral, we take the risk of fact checking it earlier rather than later, even if we have only one or a few flags, just to make sure the story is indeed true before it reaches a larger audience,” Gomez argues. “If something isn’t as viral, you can wait for more people to flag the story and then let the fact checkers loose. It’s a matter of resources: not every story can be investigated. We really have to pick out the ones that pose the highest risk of spreading.”
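    A minimal sketch of this prioritization rule, written in Python, may help to illustrate the idea. It is not the authors’ CURB algorithm itself, which is formulated as an online decision problem over the processes by which stories are shared and flagged; the thresholds and field names below are assumptions chosen purely for illustration.

    # Illustrative sketch only: send a story for fact checking either when it has
    # collected many flags, or when even a few flags coincide with a high estimated
    # chance of going viral. Thresholds and fields are assumed for this example.
    from dataclasses import dataclass

    @dataclass
    class Story:
        flags: int                 # number of users who flagged the story so far
        viral_probability: float   # estimated probability of the story going viral

    def should_fact_check(story: Story,
                          flag_threshold: int = 20,
                          low_flag_threshold: int = 2,
                          viral_threshold: float = 0.8) -> bool:
        """Decide whether to spend fact-checking resources on a story now."""
        # Many flags: fact check regardless of the predicted spread.
        if story.flags >= flag_threshold:
            return True
        # Few flags, but the story looks likely to go viral: check early,
        # before it reaches a larger audience.
        if story.flags >= low_flag_threshold and story.viral_probability >= viral_threshold:
            return True
        # Otherwise, wait for more flags before committing resources.
        return False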

    When it comes to predicting which story will go viral and which one no one is interested in, the researchers rely on historical data for a first estimate of the rate at which people will retweet a story. They look at millions of users and how often, on average, their followers retweeted their stories. This historical data flows into the equation, together with how a story is received the moment it is posted. If there is an immediate strong response, the virality alarm bells of the algorithm, which they call CURB (because it helps to curb the spread of misinformation), ring louder than if the story is shared only sparingly in, say, the first hour after posting. If there are many retweets right away, the estimated likelihood of the story going viral increases, and the algorithm takes this into account. This makes the algorithm adaptive, and hence (artificially) intelligent.
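    The adaptive estimate can likewise be illustrated with a toy calculation: start from a prior spreading rate derived from a poster’s historical average retweets, then shift the estimate towards the retweet rate actually observed since the story was posted. The blending scheme and parameter names are assumptions made only for this sketch; the actual CURB algorithm relies on a more principled statistical model of how stories are shared and flagged over time.

    # Illustrative toy estimate of a story's spreading rate (retweets per hour):
    # a historical prior is blended with the rate observed shortly after posting.
    def estimated_spread_rate(historical_avg_retweets_per_hour: float,
                              retweets_observed: int,
                              hours_since_posted: float,
                              prior_weight_hours: float = 1.0) -> float:
        """Blend a historical prior with the observed early retweet rate."""
        observed_rate = retweets_observed / max(hours_since_posted, 1e-6)
        # The more time has passed, the more the estimate is driven by the
        # story's own behaviour rather than by the poster's history.
        w = hours_since_posted / (hours_since_posted + prior_weight_hours)
        return (1.0 - w) * historical_avg_retweets_per_hour + w * observed_rate

    # Example: a poster whose stories usually get about 5 retweets per hour, but
    # whose new story already has 120 retweets after half an hour, gets a much
    # higher estimated rate, and therefore an earlier fact check.
    print(estimated_spread_rate(5.0, 120, 0.5))   # high rate: check early
    print(estimated_spread_rate(5.0, 2, 0.5))     # low rate: wait for more flags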

    Gomez hopes that CURB will one day be used by social networks to determine which story to fact check at what time, and that it will also serve as a basis for more sophisticated algorithms by other researchers in the future. “To facilitate that, together with the paper, we will release an open source implementation of our algorithm, several datasets, and a supporting website (http://learning.mpi-sws.org/curb).”

    However, CURB has its limitations, as the researchers openly admit. “The algorithm doesn’t solve what happens once a story has been fact checked and labelled fake. If you let it keep spreading on the internet, many people are exposed to the hoax. If you remove it from people’s news feeds, one could argue you are censoring the web,” Gomez explains. How to deal with fake news remains a much-debated topic, and it is an open question whether a warning that a particular story is disputed is enough to make people realize it is fake. In the end, it is up to the users and their human intelligence to make a judgement. Artificial intelligence cannot help with that.


    More information:

    http://learning.mpi-sws.org/curb/


    Images

    Manuel Gomez Rodriguez, researcher at the MPI-SWS
    Photo: MPI-SWS/Oliver Dietze


    Characteristics of this press release:
    Journalists, scientists
    Society, information technology, media and communication sciences, economics
    transregional
    Miscellaneous scientific news, scientific publications
    English


     
