When Is It Safe to Introduce an AI System Into Healthcare? A Practical Decision Algorithm for the Ethical Implementation of Black-Box AI in Medicine


Bibliographic Details
Other titles: IAB 17th World Congress
Authors: Allen, Jemima Winifred (Author); Wilkinson, Dominic (Author); Savulescu, Julian (Author)
Corporate author: International Association of Bioethics (Creator)
Format: Electronic Article
Language: English
Check availability: HBZ Gateway
Interlibrary Loan: Interlibrary Loan for the Fachinformationsdienste (Specialized Information Services in Germany)
Published: 2026
In: Bioethics
Year: 2026, Volume: 40, Issue: 1, Pages: 61-72
RelBib Classification: NCH Medical ethics
NCJ Science and ethics
ZG Sociology of media; digital media; information and communication sciences
Non-standardized subjects: B black-box AI
B Informed consent
B Risk assessment
B Artificial intelligence
B Clinical practice
B Large language models
Online access: Full text (free of charge)
Description
Abstract: There is mounting global interest in the revolutionary potential of AI tools. However, their use in healthcare carries certain risks. Some argue that opaque ('black box') AI systems in particular undermine patients' informed consent. While interpretable models offer an alternative, this approach may be impossible with generative AI and large language models (LLMs). Thus, we propose that AI tools should be evaluated for clinical use based on their implementation risk, rather than interpretability. We introduce a practical decision algorithm for the clinical implementation of black-box AI that evaluates its implementation risk. Applied to the case of an LLM for surgical informed consent, the algorithm assesses a system's implementation risk by evaluating: (1) technical robustness, (2) implementation feasibility and (3) analysis of harms and benefits. Accordingly, the system is categorised as minimal-risk (standard use), moderate-risk (innovative use) or high-risk (experimental use). Recommendations for implementation are proportional to risk, requiring more oversight for higher-risk categories. The algorithm also considers the system's cost-effectiveness and patients' informed consent.
ISSN:1467-8519
Contained in: Bioethics
Persistent identifiers:DOI: 10.1111/bioe.70032