idw – Informationsdienst Wissenschaft

08.03.2022 15:29

Trusting in AI in medicine

Dipl.-Journ. Constantin Schulte Strathaus, Press and Public Relations
Katholische Universität Eichstätt-Ingolstadt

    The use of artificial intelligence in medicine offers new ways of making more precise diagnoses and relieving doctors of routine tasks. How well do doctors really have to understand this technology to develop the “right” measure of trust in such systems? And does the use of AI lead to any ethically relevant changes in the doctor-patient relationship? A project headed by the THI Ingolstadt and the Catholic University Eichstätt-Ingolstadt (KU) will be working on answers to these and similar questions.

    Cooperative partners in this project are Prof. Dr. Matthias Uhl, who holds the Professorship for Social Implications and Ethical Aspects of AI, and Prof. Dr.-Ing. Marc Aubreville, Professor of Image Understanding and Medical Application of AI at the THI, as well as Prof. Dr. Alexis Fritz, holder of the Chair of Moral Theology at the KU. The project “Responsibility Gaps in Human-Machine Interactions: The Ambivalence of Trust in AI” is funded by the bidt, the Bavarian Research Institute for Digital Transformation.

    Monotonous tasks are time-consuming and tiring for humans. Having experienced doctors assess dozens of mammograms can have the unwanted side effect that small but diagnostically relevant details are overlooked. Putting AI to good use in this field has the potential to relieve humans of this burden and free up their capacities for decision-making. “This is based on the assumption that the human experts must be able to trust the AI system. This trust, however, can lead to the doctor not critically reassessing the AI decision”, says Prof. Dr. Marc Aubreville. Even the systems typically used in the medical field are not infallible. That is, after all, why in all procedures humans are meant to be the last authority in the decision-making chain.

    But is that enough to establish a reliable degree of accountability in the interaction of human and machine? “The simplest approach, which aims at introducing a human into the process only when wrong decisions have to be rectified, is too naive”, says Prof. Dr. Alexis Fritz. Just as humans feel less accountable when they have reached a decision in cooperation with other humans, studies have shown that the same holds true when human decision-makers have been counseled by a system that makes recommendations. Prof. Dr. Matthias Uhl sums up the findings of his own empirical studies as follows: “In different morally relevant contexts of decision-making, we have seen that humans keep following AI recommendations even if we give them good reason to doubt the system’s recommendations, for example because the system was trained on poor-quality data.”

    “Seen from an ethical perspective, merely optimizing AI systems technically is too narrow an approach. That is why we want to compare the first scenario with a situation in which the first step is not a recommendation by an AI. Instead, a real doctor has made a diagnosis, which is only then, in a second step, validated by artificial intelligence”, says Professor Fritz. He will explore the normative requirements that help decision-makers remain aware of their agency, so that they shift less responsibility to an artificial intelligence. To this end, existing studies on the ethical ramifications of the interaction between doctors and AI-based systems are being gathered and analyzed. The different concepts of responsibility and accountability in medical practice will be evaluated in workshops and qualitative interviews with doctors and engineers. Among other things, the relationship between doctor and patient will also play a role: do doctors, for example, feel that consulting a recommendation system undermines their authority in front of patients?

    In general terms, the project participants want to provide data for the development of user-aware AI solutions. “Today, a lot of solutions are designed without any consideration of the subsequent decision-making processes. Based on our findings, we will therefore be able to evaluate how best to present the results and ambiguities of algorithms to the expert in charge. Our aim in this will be to find the right balance of trust in algorithmic recommendations, especially in situations outside the norm, in which an algorithm might not be able to provide the best advice”, says Fritz.
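    To make the two study designs contrasted above concrete, here is a minimal, purely illustrative Python sketch; every name in it (Assessment, ai_model, doctor_review, ai_first, doctor_first) is a hypothetical stand-in, not part of any system described by the project.

        # Illustrative sketch (hypothetical names throughout): two orderings
        # of human and AI in a diagnostic decision chain.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Assessment:
            label: str         # e.g. "suspicious" or "unremarkable"
            confidence: float  # reported confidence in [0, 1]

        def ai_model(case_id: str) -> Assessment:
            """Stand-in for an AI classifier, e.g. mammogram screening."""
            return Assessment("suspicious", 0.87)

        def doctor_review(case_id: str,
                          hint: Optional[Assessment] = None) -> Assessment:
            """Stand-in for the human expert, who may see an AI hint."""
            return Assessment("suspicious", 0.90)

        def ai_first(case_id: str) -> Assessment:
            # Scenario 1: the AI recommends first; the doctor decides last
            # and risks merely rubber-stamping the recommendation.
            recommendation = ai_model(case_id)
            return doctor_review(case_id, hint=recommendation)

        def doctor_first(case_id: str) -> Assessment:
            # Scenario 2: the doctor diagnoses first; the AI only validates.
            diagnosis = doctor_review(case_id)
            validation = ai_model(case_id)
            if validation.label != diagnosis.label:
                # On disagreement the case returns to the human, who stays
                # the last authority in the decision chain.
                return doctor_review(case_id, hint=validation)
            return diagnosis

    Both orderings take the same steps; what the project asks is where responsibility settles in each.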



    Characteristics of this press release:
    Journalists, scientists
    Society, Information technology, Medicine, Philosophy / ethics
    transregional
    Research projects
    English


     
