02.06.2021 15:02

Mastering Artificial Intelligence

Dr. Karin Röhricht, Press and Public Relations
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA

    Study “Explainable AI in practice – application-based evaluation of xAI methods”

    In most cases, artificial intelligence has a black-box character, yet trust can only be built through transparency. Specialized software is available to explain different AI models. A study by Fraunhofer IPA has now compared and evaluated various methods that explain machine learning approaches.

    Just a few decades ago, artificial intelligence (AI) was the stuff of science fiction, but it has since become part of our daily lives. In manufacturing, it identifies anomalies in the production process. For banks, it makes decisions regarding loans. And on Netflix, it finds just the right film for every user. All of this is made possible by highly complex algorithms that work unseen in the background. The more challenging the problem, the more complex the AI model – and the more inscrutable it becomes.

    But users want to be able to understand how a decision has been made, particularly with critical applications: Why was the workpiece rejected? What caused the damage to my machine? Only by understanding the reasons behind decisions can improvements be made – and this increasingly applies to safety too. In addition, the EU General Data Protection Regulation stipulates that decisions must be transparent.

    Software comparison for xAI

    In order to solve this problem, an entirely new field of research has emerged: “Explainable Artificial Intelligence”, or xAI for short. There are now numerous digital aids on the market that explain complex AI approaches. In an image, for example, they mark the pixels that led to a part being rejected. The Stuttgart-based experts at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA have now compared and evaluated nine common explanation techniques – including LIME, SHAP and Layer-Wise Relevance Propagation – by means of exemplary applications. There were three main criteria, illustrated by the short code sketch after this list:
    - Stability: The program should always generate the same explanation for the same problem. For example, if the same anomaly occurs in a production machine, the explanation should not point to sensor A one time and sensor B the next. This would undermine trust in the algorithm and make it difficult to decide on a course of action.
    - Consistency: Similar input data that differ only slightly should likewise receive similar explanations.
    - Fidelity: Above all, explanations must reflect how the AI model actually behaves. For example, the explanation for a rejected bank loan should not state that the customer is too old when the actual reason was that their income was too low.
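
    The study itself does not publish code, but the first two criteria lend themselves to a simple programmatic check. The sketch below is purely illustrative and not part of the study: it assumes a scikit-learn classifier trained on made-up tabular inspection data and uses the open-source shap library (SHAP being one of the nine techniques examined) to compare attributions for identical and minimally perturbed inputs.

        # Hypothetical sketch (not from the study): SHAP attributions for a
        # tabular quality-control model, plus simple stability/consistency checks.
        import numpy as np
        import shap
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))                  # e.g. four sensor readings
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # pass / reject label
        model = RandomForestClassifier(random_state=0).fit(X, y)

        explainer = shap.TreeExplainer(model)
        sample = X[:1]

        # Stability: the identical sample must yield the identical explanation.
        attr_a = np.asarray(explainer.shap_values(sample))
        attr_b = np.asarray(explainer.shap_values(sample))
        print("stable:", np.allclose(attr_a, attr_b))

        # Consistency: a minimally perturbed sample should be explained similarly.
        attr_c = np.asarray(explainer.shap_values(sample + 1e-3))
        print("max attribution shift:", np.abs(attr_a - attr_c).max())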

    The use case is crucial

    The study concludes that all of the explanation methods examined are viable. However, as Nina Schaaf, who is responsible for the study at Fraunhofer IPA, explains: “There is no single perfect method.” Significant differences emerge, for example, in the time needed to generate explanations. The intended use also largely determines which software is best to apply: Layer-Wise Relevance Propagation and Integrated Gradients, for instance, are particularly well suited to image data. Schaaf sums up: “The target group is also important when it comes to explanations: an AI developer will want and should receive an explanation phrased differently to the production manager, as both will draw different conclusions from the explanation.”
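
    For image data, attribution methods such as these assign a relevance score to every input pixel. The following sketch is likewise not taken from the study; it only shows, assuming a PyTorch setup, how Integrated Gradients can be applied via the open-source captum library to a placeholder classifier that distinguishes “ok” from “reject” images. The resulting attribution tensor has the same shape as the image and can be overlaid on it as a heatmap.

        # Hypothetical sketch: pixel-level attributions with Integrated Gradients
        # (captum). Model and input tensor are stand-ins, not the study's setup.
        import torch
        import torch.nn as nn
        from captum.attr import IntegratedGradients

        model = nn.Sequential(                 # placeholder "ok" vs. "reject" classifier
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
        )
        model.eval()

        image = torch.rand(1, 1, 28, 28, requires_grad=True)  # one grayscale image
        baseline = torch.zeros_like(image)                     # all-black reference

        ig = IntegratedGradients(model)
        # Attributions for the "reject" class (index 1); each value indicates how
        # strongly a pixel pushed the decision towards rejection.
        attributions = ig.attribute(image, baselines=baseline, target=1)
        print(attributions.shape)  # torch.Size([1, 1, 28, 28])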


    Scientific contact:

    Nina Schaaf; Tel.: +49 711 970-1971; nina.schaaf@ipa.fraunhofer.de


    Original publication:

    Download of the German and English versions: https://www.ki-fortschrittszentrum.de/de/studien/erklaerbare-ki-in-der-praxis.ht...


    Further information:

    https://www.ipa.fraunhofer.de/en/about-us/guiding-themes/ai/Dependable_AI.html


    Images

    Study "Explainable AI in practice – application-based evaluation of xAI methods" (Image: Fraunhofer IPA and IAO)


    Attachment
    Press Release as PDF

    Characteristics of this press release:
    Journalists, scientists
    Mechanical engineering
    transregional, national
    Transfer of Science or Research, Research results
    English

