idw – Informationsdienst Wissenschaft

06/30/2023 18:00

Dangerous chatbots: Prof. Stephen Gilbert calls for AI chatbots to be approved as medical devices

Anja Stübner, Press Office
Technische Universität Dresden

    LLM-based generative chat tools, such as ChatGPT or Google’s MedPaLM, have great medical potential, but there are inherent risks associated with their unregulated use in healthcare. The new Nature Medicine paper by Prof. Stephen Gilbert et al. addresses one of the most pressing international issues of our time: how to regulate Large Language Models (LLMs) in general, and in healthcare specifically.

    “Large Language Models are neural network language models with remarkable conversational skills. They generate human-like responses and engage in interactive conversations. However, they often generate highly convincing statements that are verifiably wrong or provide inappropriate responses. Today there is no way to be certain about the quality, evidence level, or consistency of clinical information or supporting evidence for any response. These chatbots are unsafe tools when it comes to medical advice, and it is necessary to develop new frameworks that ensure patient safety”, said Prof. Stephen Gilbert, Professor for Medical Device Regulatory Science at the Else Kröner Fresenius Center for Digital Health at TU Dresden.

    Challenges in the regulatory approval of large language models

    Most people research their symptoms online before seeking medical advice, and search engines play a role in the decision-making process. The forthcoming integration of LLM chatbots into search engines may increase users’ confidence in the answers given by a chatbot that mimics conversation. It has been demonstrated that LLMs can provide profoundly dangerous information when prompted with medical questions, and their underlying approach has no model of medical “ground truth”. Chat-interfaced LLMs have already provided harmful medical responses and have already been used unethically in ‘experiments’ on patients without consent.

    Almost every medical LLM use case requires regulatory control in the EU and the US. In the US, their lack of explainability disqualifies them from being ‘non-devices’. LLMs with explainability, low bias, predictability, correctness and verifiable outputs do not currently exist, and they are not exempted from current (or future) governance approaches. In the paper, the authors describe the limited scenarios in which LLMs could find application under current frameworks, describe how developers can seek to create LLM-based tools that could be approved as medical devices, and explore the development of new frameworks that preserve patient safety.

    “Current LLM chatbots do not meet key principles for AI in healthcare, like bias control, explainability, systems of oversight, validation and transparency. To earn their place in the medical armamentarium, chatbots must be designed for better accuracy, with safety and clinical efficacy demonstrated and approved by regulators,” concludes Prof. Gilbert.

    Contact:
    Anja Stübner
    Tel.: +49 351 458-11379
    Email: Anja.Stuebner@ukdd.de


    Original publication:

    Published in Nature Medicine: “Large language model AI chatbots require approval as medical devices” – Stephen Gilbert, Hugh Harvey, Tom Melvin, Erik Vollebregt, Paul Wicks
    DOI: 10.1038/s41591-023-02412-6 (http://dx.doi.org/10.1038/s41591-023-02412-6)
    Prof. Stephen Gilbert
    Professor for Medical Device Regulatory Science
    Else Kröner Fresenius Center for Digital Health at TU Dresden
    Publication as part of the research project “PATH – Personal Mastery of Health and Wellness Data”, funded by the BMBF and the EU NextGenerationEU programme under grant number 16KISA100K.


    Image: Prof. Stephen Gilbert (Photo: EKFZ/A. Stübner)

    Criteria of this press release:
    Journalists
    Information technology, Medicine
    transregional, national
    Research projects, Scientific Publications
    English


     
