idw – Informationsdienst Wissenschaft

06/02/2021 15:02

Mastering Artificial Intelligence

Dr. Karin Röhricht, Press and Public Relations
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA

    Study “Explainable AI in practice – application-based evaluation of xAI methods”

    In most cases, artificial intelligence has a black-box character, yet trust can only be built through transparency. Specialized software is available to explain different AI models. A study by Fraunhofer IPA has now compared and evaluated various methods that explain machine learning models.

    Just a few decades ago, artificial intelligence (AI) was the stuff of science fiction; it has since become part of our daily lives. In manufacturing, it identifies anomalies in the production process. For banks, it makes decisions regarding loans. And on Netflix, it finds just the right film for every user. All of this is made possible by highly complex algorithms that work invisibly in the background. The more challenging the problem, the more complex the AI model – and the more inscrutable it becomes.

    But users want to be able to understand how a decision has been made, particularly in critical applications: Why was the workpiece rejected? What caused the damage to my machine? Only by understanding the reasons for decisions can improvements be made – and this increasingly applies to safety as well. In addition, the EU General Data Protection Regulation stipulates that decisions must be transparent.

    Software comparison for xAI

    To solve this problem, an entirely new field of research has emerged: “Explainable Artificial Intelligence”, or xAI for short. There are now numerous digital aids on the market that explain complex AI approaches. For example, in an image they mark the pixels that led to a part being rejected. The Stuttgart-based experts at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA have now compared and evaluated nine common explanation techniques – including LIME, SHAP and Layer-Wise Relevance Propagation – by means of exemplary applications. Three main criteria were used:
    - Stability: The program should always generate the same explanation for the same problem. For example, if the same anomaly arises in a production machine, an explanation should never flag sensor A on one occasion and sensor B on another. This would harm trust in the algorithm and make it difficult to decide what course of action to take (a minimal check of this criterion is sketched after this list).
    - Consistency: Input data that differ only slightly should also receive similar explanations.
    - Fidelity: It is also especially important that explanations reflect how the AI model actually behaves. For example, the explanation for the rejection of a bank loan should not be that the customer is too old when the actual reason was that their income was too low.
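
    The following is a minimal sketch of such a stability check, assuming a scikit-learn classifier and the LIME library; the data set, the model and the choice of five top features are illustrative assumptions and are not taken from the study.

    # Minimal stability check: explain the same prediction twice and compare
    # which features are flagged. Model, data and "top 5 features" are
    # illustrative stand-ins, not the set-up used in the Fraunhofer IPA study.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data, feature_names=list(data.feature_names), mode="classification"
    )

    def top_features(instance, k=5):
        # Names of the k features LIME deems most important for this prediction.
        exp = explainer.explain_instance(instance, model.predict_proba, num_features=k)
        return [name for name, _ in exp.as_list()]

    # Stability: two explanations of the same instance should flag the same features.
    print(top_features(data.data[0]) == top_features(data.data[0]))

    Because LIME builds its explanations from random perturbations of the input, the two runs can in fact disagree unless a random seed is fixed – exactly the kind of instability this criterion is meant to penalize.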

    The use case is crucial

    The study's conclusion: all of the explanation methods examined proved viable. However, as Nina Schaaf, who is responsible for the study at Fraunhofer IPA, explains: “There is no single perfect method.” For example, significant differences emerge in the time needed to generate explanations. The particular objective also largely determines which software is best to use; Layer-Wise Relevance Propagation and Integrated Gradients, for example, are particularly well suited to image data. In summary, Schaaf says: “The target group is also important when it comes to explanations: An AI developer will want, and should receive, an explanation phrased differently from the one given to a production manager, as the two will draw different conclusions from it.”
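
    As an illustration of pixel-level attribution for image data, the following sketch applies Integrated Gradients via the Captum library; the toy PyTorch classifier and the random input image are placeholders and do not correspond to the models evaluated in the study.

    # Illustrative only: a toy image classifier and a random "image" stand in
    # for the real models and data; Integrated Gradients assigns each pixel a
    # relevance value for the predicted class.
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 32 * 32, 10),
    )
    model.eval()

    image = torch.rand(1, 3, 32, 32)              # dummy 32x32 RGB image

    ig = IntegratedGradients(model)
    attributions = ig.attribute(image, target=0)  # relevance per pixel for class 0
    print(attributions.shape)                     # torch.Size([1, 3, 32, 32])

    The resulting attribution map can then be overlaid on the original image to show which regions drove the decision.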


    Contact for scientific information:

    Nina Schaaf; Tel.: +49 711 970-1971; nina.schaaf@ipa.fraunhofer.de


    Original publication:

    Download of the German and English version: https://www.ki-fortschrittszentrum.de/de/studien/erklaerbare-ki-in-der-praxis.ht...


    More information:

    https://www.ipa.fraunhofer.de/en/about-us/guiding-themes/ai/Dependable_AI.html


    Images

    Study "Explainable AI in practice – application-based evaluation of xAI methods" (Image: Fraunhofer IPA and IAO)


    Attachment: Press Release as PDF

    Criteria of this press release:
    Journalists, Scientists and scholars
    Mechanical engineering
    transregional, national
    Research results, Transfer of Science or Research
    English


     
