The medical use of artificial intelligence (AI) threatens to undermine patients' ability to make personalised decisions. New research by Dr Christian Günther, a scientist at the Max Planck Institute for Social Law and Social Policy, uses case studies from the UK and California to analyse whether and how the law can counter this threat to patient autonomy.
The legal scholar comes to the conclusion that the law has a proactive dynamic that allows it to react very well to innovations – and even better than extra-legal regulatory approaches. ‘Contrary to widespread assumptions, the law is not an obstacle that only hinders the development and use of innovative technology. On the contrary: it actively shapes this development and plays a central role in the governance of new technologies,’ explains Günther.
A multitude of clinical AI systems are currently being approved for use in healthcare systems worldwide. AI is defined as a technology capable of accomplishing the kinds of tasks that human experts have previously solved through their knowledge, skills and intuition. The machine learning approach in particular has been a key driver in the development of clinical AI with such capabilities. Despite their advantages, however, AI systems can pose a potential threat to patients' legally required informed consent. This obligation requires the medical professional to disclose information in order to redress the imbalance of expertise between the two sides.
In his research, Christian Günther identifies four specific problems that can occur in this context:
1. The use of clinical AI creates a degree of uncertainty based on the nature of AI-generated knowledge and the difficulties in scientifically verifying that knowledge.
2. Some ethically significant decisions may be made relatively independently, i.e. without meaningful patient involvement.
3. Patients' ability to make rational decisions in the medical decision-making process can be significantly undermined.
4. Patients may not be able to respond appropriately to non-obvious substitutions of human expertise by AI.
To address these issues, Günther examines the norms underlying the principle of informed consent in the UK and California and, using a specific regulatory proposal, demonstrates how legal regulations can be developed in a targeted manner to both promote technological progress and protect patient rights.
Dr. Christian Günther
Research Fellow
Dr. Julia Hagn
Science Communication
Tel.: 089/38602428
Email: presse@mpisoc.mpg.de
Günther, Christian: Artificial Intelligence, Patient Autonomy and Informed Consent, Nomos: Baden-Baden 2024.
Open Access: https://www.nomos-elibrary.de/10.5771/9783748948919/artificial-intelligence-pati...