A Comprehensive Review on Explainable Artificial Intelligence (XAI) Techniques for Healthcare Diagnosis and Decision Support Systems

Authors

  • Swati Khanve
  • Vinay Lowanshi
  • Minakshi Solanki

Keywords

Clinical decision support, EEG, Explainable AI (XAI), Grad-CAM, Healthcare, Interpretability, LIME, Medical imaging, SHAP

Abstract

Explainable Artificial Intelligence (XAI) has become a critical research area for deploying AI in healthcare, where safety, accountability, trust, and regulatory compliance are essential. This paper reviews XAI approaches applied to healthcare diagnosis and decision support systems, provides a taxonomy of methods (model-intrinsic vs. model-agnostic; global vs. local; visual vs. attributional vs. rule-based), compares representative techniques (SHAP, LIME, Grad-CAM, Integrated Gradients, attention mechanisms, and rule/exemplar approaches), and surveys application domains including medical imaging, electronic health records (EHR), and physiological signal analysis (e.g., EEG). We synthesise comparative strengths and limitations, summarise clinical evaluation strategies, and identify open research gaps, including clinical validation, standardised XAI evaluation metrics, multimodal explanations, and regulatory alignment. Finally, we propose future directions to accelerate the adoption of clinically trusted XAI.
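To illustrate the model-agnostic, local attribution methods compared in this review (SHAP, LIME, and related perturbation approaches), the following minimal Python sketch attributes a toy prediction to individual input features by occlusion, i.e., replacing one feature at a time with a baseline value. The model, weights, and feature values below are hypothetical assumptions for illustration only, not taken from the paper; for a linear model, occlusion against a baseline coincides with exact SHAP values.

```python
def predict(x):
    # Toy linear risk score over three clinical features (hypothetical weights).
    w = [0.5, 0.3, 0.2]
    return sum(wi * xi for wi, xi in zip(w, x))

def occlusion_attributions(model, x, baseline):
    """Model-agnostic local explanation: attribute the prediction by
    swapping each feature for its baseline value and measuring the
    resulting drop in the model output."""
    full = model(x)
    return [
        full - model(x[:i] + [baseline[i]] + x[i + 1:])
        for i in range(len(x))
    ]

x = [2.0, 1.0, 3.0]          # one patient's feature vector (hypothetical)
baseline = [0.0, 0.0, 0.0]   # reference input for the perturbation
attrs = occlusion_attributions(predict, x, baseline)
print(attrs)                 # per-feature contributions to this prediction
```

Because the toy model is linear, the attributions sum to the prediction (the "efficiency" property SHAP guarantees in general); for non-linear clinical models, SHAP and LIME approximate such contributions via weighted sampling of perturbations rather than single-feature occlusion.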

References

A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, et al., “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI,” Information Fusion, vol. 58, pp. 82–115, Jun. 2020, doi: https://doi.org/10.1016/j.inffus.2019.12.012

E. Tjoa and C. Guan, “A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 11, pp. 1–21, 2020, doi: https://doi.org/10.1109/tnnls.2020.3027314

S. Iftikhar, N. Anjum, A. B. Siddiqui, M. Ur Rehman, and N. Ramzan, “Explainable CNN for brain tumor detection and classification through XAI based key features identification,” Brain Informatics, vol. 12, no. 1, Apr. 2025, doi: https://doi.org/10.1186/s40708-025-00257-y

A. Kuppa and N.-A. Le-Khac, “Adversarial XAI Methods in Cybersecurity,” IEEE Transactions on Information Forensics and Security, pp. 1–1, 2021, doi: https://doi.org/10.1109/tifs.2021.3117075

K. Kalasampath, S. KN, S. Sajeev, S. S. Kuppa, K. Ajay, and M. Angulakshmi, “A Literature Review on Applications of Explainable Artificial Intelligence (XAI),” IEEE Access, pp. 1–1, 2025, doi: https://doi.org/10.1109/access.2025.3546681

E. S. Ortigossa, T. Gonçalves, and L. G. Nonato, “EXplainable Artificial Intelligence (XAI) – From Theory to Methods and Applications,” IEEE Access, vol. 12, pp. 80799–80846, Jan. 2024, doi: https://doi.org/10.1109/access.2024.3409843

M. Saarela and V. Podgorelec, “Recent Applications of Explainable AI (XAI): A Systematic Literature Review,” Applied Sciences, vol. 14, no. 19, p. 8884, Oct. 2024, doi: https://doi.org/10.3390/app14198884

L. Zhu, R. Wang, X. Jin, and Y. Li, “Explainable Depression Classification Based on EEG Feature Selection From Audio Stimuli,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 33, pp. 1411–1426, Jan. 2025, doi: https://doi.org/10.1109/tnsre.2025.3557275

Z. Cheng, Y. Wu, Y. Li, L. Cai, and B. Ihnaini, “A Comprehensive Review of Explainable Artificial Intelligence (XAI) in Computer Vision,” Sensors, vol. 25, no. 13, p. 4166, Jul. 2025, doi: https://doi.org/10.3390/s25134166

A. M. Salih, I. B. Galazzo, P. Gkontra, et al., “A review of evaluation approaches for explainable AI with applications in cardiology,” Artificial Intelligence Review, vol. 57, no. 9, Aug. 2024, doi: https://doi.org/10.1007/s10462-024-10852-w

Published

2025-12-23

Section

Articles