SciELO Chile Collection

Knowledge Management, Monitoring and Prospecting Department
Questions or comments: productividad@anid.cl



Crossing the Trust Gap in Medical AI: Building an Abductive Bridge for xAI
Indexed
Scopus SCOPUS_ID:85201531768
DOI 10.1007/s13347-024-00790-4
Year 2024

Abstract



In this paper, we argue that one way to approach what is known in the literature as the “Trust Gap” in Medical AI is to focus on explanations from an Explainable AI (xAI) perspective. Against the current framework on xAI – which does not offer a real solution – we argue for a pragmatist turn, one that focuses on understanding how we provide explanations in Traditional Medicine (TM), composed of human agents only. On this view, explanations have two relevant components: they are usually (i) social and (ii) abductive. Explanations, in this sense, ought to provide understanding by answering contrastive why-questions: “Why did P happen instead of Q?” (Miller in AI 267:1–38, 2019) (Sect. 1). To test the relevance of this concept of explanation in medical xAI, we offer several reasons to argue that abductions are crucial for medical reasoning and provide a crucial tool for dealing with trust gaps between human agents (Sect. 2). If abductions are relevant in TM, we can test the capability of Artificial Intelligence systems on this merit. Accordingly, we analyze the capacity for social and abductive reasoning of different AI technologies and posit that Large Language Models (LLMs) and transformer architectures exhibit noteworthy potential for effective engagement in abductive reasoning. By leveraging the potential abductive capabilities of LLMs and transformers, we anticipate a paradigm shift in the integration of explanations within AI systems. This, in turn, has the potential to enhance the trustworthiness of AI-driven medical decisions, bridging the Trust Gap that has been a prominent challenge in the field of Medical AI (Sect. 3). This development holds the potential not only to improve the interpretability of AI-generated medical insights but also to preserve trust among practitioners, patients, and stakeholders in the healthcare domain.

Journal



Journal: Philosophy & Technology
ISSN: 2210-5433

External Metrics



PlumX, Altmetric, Dimensions

Shows the external impact metrics associated with the publication.

Research Disciplines



WoS: No disciplines
Scopus: No disciplines
SciELO: No disciplines

Shows the distribution of disciplines for this publication.

WoS publications (Editions: ISSHP, ISTP, AHCI, SSCI, SCI), Scopus, SciELO Chile.

Institutional Collaboration



Shows the distribution of collaboration, both national and international, generated in this publication.


Authors - Affiliation



No.  Author              Gender  Institution - Country
1    Gouveia, Steven S.  -       Universidade do Porto - Portugal
                                 Universidad Nacional Andrés Bello - Chile
2    Malík, Jaroslav     -       Univerzita Hradec Králové - Czech Republic

Shows the affiliation and (detected) gender of the publication's co-authors.

Funding



Source
Fundação para a Ciência e a Tecnologia
Universidade do Porto
7th Rebuilding Trust
CEEC
Univerzita Hradec Králové

Shows the funding sources declared in the publication.

Acknowledgements



Acknowledgement
SG is funded by CEEC Individual Project by FCT 2022.02527.CEECIND, at the Mind, Language and Action Group, Institute of Philosophy, University of Porto (Faculdade de Letras, Via Panorâmica s/n, P-4150-564 Porto, Portugal). JM is funded by Specific Research project “Current and future issues of AI: Need for explainable and ethical AI” supported by the Philosophical Faculty of the University of Hradec Králové in 2024.
The authors would like to thank the anonymous reviewers and the Editor-in-Chief for the useful comments that improved this final version of the paper. SG was a Visiting Researcher at the Robotics Lab (A.S. Centre) at the University of Palermo during the revisions and presented preliminary versions at the Palermo International Workshop 2024 and the World Congress of Philosophy 2024. JM was an Erasmus+ Visiting Researcher at the Faculty of Letters of the University of Porto during the production of the paper. He presented preliminary versions at the 7th Rebuilding Trust in AI Medicine Online Seminar, at the MLAG Seminar at the University of Porto, and at the World Congress of Philosophy 2024. We would both like to thank the different audiences for their feedback, which inspired changes to earlier versions of this paper.

Shows the acknowledgements declared in the publication.