When Is It Safe to Introduce an AI System Into Healthcare? A Practical Decision Algorithm for the Ethical Implementation of Black-Box AI in Medicine


Bibliographic details
Subtitle: IAB 17th World Congress
Authors: Allen, Jemima Winifred (Author); Wilkinson, Dominic (Author); Savulescu, Julian (Author)
Corporate body: International Association of Bioethics (Creator of intellectual/artistic content)
Format: Electronic article
Language: English
Check availability: HBZ Gateway
Interlibrary loan: Interlibrary loan for the Specialised Information Services
Published: 2026
In: Bioethics
Year: 2026, Volume: 40, Issue: 1, Pages: 61-72
RelBib Classification: NCH Medical ethics
NCJ Ethics of science
ZG Media studies; Digitality; Communication studies
Further subjects: black-box AI
Informed Consent
Risk assessment
Artificial Intelligence
Clinical Practice
large language models
Online access: Full text (free of charge)
Description
Summary: There is mounting global interest in the revolutionary potential of AI tools. However, their use in healthcare carries certain risks. Some argue that opaque ('black box') AI systems in particular undermine patients' informed consent. While interpretable models offer an alternative, this approach may be impossible with generative AI and large language models (LLMs). Thus, we propose that AI tools should be evaluated for clinical use based on their implementation risk, rather than their interpretability. We introduce a practical decision algorithm for the clinical implementation of black-box AI based on this risk. Applying it to the case of an LLM for surgical informed consent, we assess a system's implementation risk by evaluating: (1) technical robustness, (2) implementation feasibility and (3) an analysis of harms and benefits. Accordingly, the system is categorised as minimal-risk (standard use), moderate-risk (innovative use) or high-risk (experimental use). Recommendations for implementation are proportional to risk, requiring more oversight for higher-risk categories. The algorithm also considers the system's cost-effectiveness and patients' informed consent.
ISSN: 1467-8519
Contained in: Bioethics
Persistent identifiers: DOI: 10.1111/bioe.70032