Explainability, Public Reason, and Medical Artificial Intelligence

Bibliographic Details
Author: Da Silva, Michael (Author)
Format: Electronic Article
Language: English
Check availability: HBZ Gateway
Interlibrary Loan: Interlibrary loan via the specialized information services (Fachinformationsdienste)
Published: 2023
In: Ethical theory and moral practice
Year: 2023, Volume: 26, Issue: 5, Pages: 743-762
RelBib Classification: NCD Political Ethics
NCJ Ethics of Science
VA Philosophy
ZC Politics
ZG Media Studies; Digitality; Communication Studies
Further subjects: B Artificial Intelligence
B Political Philosophy
B AI
B Public Reason
B Governance
Online Access: Full text (free)
Description
Summary: The contention that medical artificial intelligence (AI) should be "explainable" is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that "explainability" is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent "political response" to these issues contends that AI used in high-stakes scenarios, including medical AI, must be explainable to meet basic standards of legitimacy: People are owed reasons for decisions that impact their vital interests, and this requires explainable AI. This article demonstrates why the political response fails. Attending to systemic considerations, as its proponents desire, suggests that the political response is subject to the same criticisms as other arguments for explainable AI and presents new issues. It also suggests that decision-making about non-explainable medical AI can meet public reason standards. The most plausible version of the response amounts to a simple claim that public reason demands reasons why AI is permitted. But that does not actually support explainable AI or respond to criticisms of strong requirements for explainable medical AI.
ISSN: 1572-8447
Contained in: Ethical theory and moral practice
Persistent identifiers: DOI: 10.1007/s10677-023-10390-4