Explainable AI for Medical Diagnosis: A Review of Current Techniques

Authors

  • Sai Ganesh Palli
  • Chandra Sekhar Koppireddy
  • K. V. V. Subba Rao

Keywords

Deep learning, Diagnostic accuracy, Explainable AI (XAI), Interpretability, Medical diagnosis, Transparency

Abstract

The application of AI systems in medicine, especially in medical diagnosis, highlights the necessity of interpretability and transparency. Deep learning methods have achieved unprecedented success in identifying complex patterns and improving diagnostic accuracy, but their "black-box" nature limits their practical utility in high-stakes clinical environments. Explainable AI (XAI) systems have been proposed as a solution, offering interpretability mechanisms that make algorithmic decision-making understandable to clinicians. These systems employ an array of explanation methods (visual, textual, local, and global) to render diagnostic conclusions intelligible, thereby supporting clinical trust, accountability, and informed decision-making. Beyond technical performance, the practicalities of incorporating XAI systems into standard clinical workflows, meeting emerging regulatory requirements, and enabling human-AI collaboration remain open issues. Notably, XAI plays a part not only in verifying existing disease indicators but also in identifying novel biomarkers that can enhance diagnostic procedures. These operational and systemic hurdles must be overcome if explainable AI is to be widely and seamlessly implemented across the healthcare system. As research progresses, the development of robust, user-centered, and regulation-compliant XAI systems will be instrumental in realizing the full potential of AI to improve patient care.
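As a concrete illustration of the local, model-agnostic explanation methods this class of reviews surveys (this is not code from the paper; the "model" below is a toy linear scorer with made-up weights standing in for a trained diagnostic classifier), a minimal occlusion-sensitivity sketch: each feature is replaced by a baseline value, and the drop in the model's score is reported as that feature's local importance.

```python
import numpy as np

def predict(x):
    # Toy stand-in for a trained diagnostic model: a logistic scorer
    # whose weights are invented purely for this illustration.
    w = np.array([0.0, 0.0, 2.0, 2.0, 0.0, 0.0])
    return float(1 / (1 + np.exp(-(x @ w - 2.0))))

def occlusion_importance(x, baseline=0.0):
    """Local explanation: score drop when each feature is occluded
    (replaced by a baseline value). Model-agnostic: only predict() calls."""
    base_score = predict(x)
    importances = np.empty_like(x, dtype=float)
    for i in range(x.size):
        occluded = x.copy()
        occluded[i] = baseline          # occlude one feature at a time
        importances[i] = base_score - predict(occluded)
    return importances

x = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])
imp = occlusion_importance(x)
print(np.round(imp, 3))  # features 2 and 3 dominate; the rest are ~0
```

Because it treats the model as a black box, the same loop applies unchanged to any classifier exposing a `predict`-style scoring function, which is why occlusion-style methods are a common baseline in the XAI literature.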

References

A. Biswas, “A Comprehensive Review of Explainable AI for Disease Diagnosis,” Array, p. 100345, Apr. 2024, doi: https://doi.org/10.1016/j.array.2024.100345.

J. Hou, J. Liu, Y. Bie, and H. Wang, “Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks,” arXiv, Oct. 2024, doi: https://doi.org/10.48550/arxiv.2410.02331.

Q. Sun, A. Akman, and B. W. Schuller, “Explainable Artificial Intelligence for Medical Applications: A Review,” arXiv, Dec. 2024, doi: https://doi.org/10.48550/arxiv.2412.01829.

L. S. Wyatt, L. M. van Karnenbeek, M. Wijkhuizen, F. Geldof, and B. Dashtbozorg, “Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review,” Applied Sciences, vol. 14, no. 18, p. 8108, Sep. 2024, doi: https://doi.org/10.3390/app14188108.

I. D. Mienye, G. Obaido, N. Jere, and E. Mienye, “A survey of explainable artificial intelligence in healthcare: Concepts, applications, and challenges,” Informatics in Medicine Unlocked, vol. 51, p. 101587, Oct. 2024, doi: https://doi.org/10.1016/j.imu.2024.101587.

S. S. Band, S. Yarahmadi, and C. C. Hsu, “Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods,” Informatics in Medicine Unlocked, vol. 40, p. 101286, Jan. 2023, doi: https://doi.org/10.1016/j.imu.2023.101286.

A. Ghasemi, S. Hashtarkhani, D. L. Schwartz, and A. Shaban‐Nejad, “Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review,” Cancer Innovation, vol. 3, no. 5, Jul. 2024, doi: https://doi.org/10.1002/cai2.136.

Y. Hafeez, K. Memon, M. S. AL-Quraishi, and N. Yahya, “Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It,” Diagnostics, vol. 15, no. 2, p. 168, Jan. 2025, doi: https://doi.org/10.3390/diagnostics15020168.

C. Metta, A. Beretta, R. Guidotti, and Y. Yin, “Explainable Deep Image Classifiers for Skin Lesion Diagnosis,” arXiv, Nov. 2021, doi: https://doi.org/10.48550/arxiv.2111.11863.

F. Mahmud, M. M. Mahin, Kabir, and Y. Abdullah, “An Interpretable Deep Learning Approach for Skin Cancer Categorization,” arXiv, Dec. 2023, doi: https://doi.org/10.48550/arxiv.2312.10696.

A. M. Salih, I. B. Galazzo, P. Gkontra, and E. Rauseo, “A review of evaluation approaches for explainable AI with applications in cardiology,” Artificial Intelligence Review, vol. 57, no. 9, Aug. 2024, doi: https://doi.org/10.1007/s10462-024-10852-w.

B. H. M. van der Velden, H. J. Kuijf, K. G. A. Gilhuijs, and M. A. Viergever, “Explainable artificial intelligence (XAI) in deep learning-based medical image analysis,” Medical Image Analysis, vol. 79, p. 102470, May 2022, doi: https://doi.org/10.1016/j.media.2022.102470.

O. Badaru I, T. A. Han, and S. Zia U, “Enhancing Cancer Diagnosis with Explainable & Trustworthy Deep Learning Models,” arXiv, Dec. 2024, doi: https://doi.org/10.48550/arxiv.2412.17527.

V. M. Rao, S. Zhang, and J. N. Acosta, “ReXErr: Synthesizing Clinically Meaningful Errors in Diagnostic Radiology Reports,” arXiv, Sep. 2024. [Online]. Available: https://arxiv.org/html/2409.10829v1

Published

2025-07-08

How to Cite

Sai Ganesh Palli, Chandra Sekhar Koppireddy, & K. V. V. Subba Rao. (2025). Explainable AI for Medical Diagnosis: A Review of Current Techniques. Journal of Computer Science Engineering and Software Testing, 11(2), 32–49. Retrieved from https://matjournals.net/engineering/index.php/JOCSES/article/view/2143

Section

Articles