Trust, Transparency, and Accountability in AI: The Role of Explainability
Keywords:
Accountability, Explainable artificial intelligence, Human-centred AI, Local interpretable model-agnostic explanations (LIME), SHAP
Abstract
As Artificial Intelligence (AI) systems become more sophisticated, the demand for transparency and interpretability grows. This paper explores the emerging domain of Explainable Artificial Intelligence (XAI), highlighting its importance in enhancing trust, accountability, and decision support in AI-driven systems. It examines both symbolic and sub-symbolic models, with a particular focus on deep neural networks and their ability to aggregate and process complex data. It then surveys recent XAI research across key sectors such as finance, healthcare, and autonomous systems. The paper outlines the advantages of XAI, including improved user trust, reduced cognitive burden, and better management of computational overhead. It also identifies critical gaps in current research and proposes future directions that emphasize human-centered design, alignment with human cognition, and explainability directives. Ultimately, the paper argues that explainability is essential to the effective and ethical deployment of AI-based decision-making systems.
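The keyword list above names LIME, a post-hoc, model-agnostic explanation method. As a minimal illustrative sketch of the kind of local explanation the paper discusses, and assuming the open-source lime and scikit-learn Python packages with a hypothetical tabular classifier (neither the dataset nor the model is taken from the paper), a single prediction could be explained as follows.

```python
# Minimal sketch: a local, model-agnostic explanation with LIME.
# Assumes the open-source `lime` and `scikit-learn` packages; the
# dataset and model below are illustrative, not from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque ("black-box") classifier on a tabular dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Build a LIME explainer over the training distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance and fits a sparse
# local surrogate model whose weights serve as the explanation.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The handful of per-feature weights printed at the end is the sense in which such explanations reduce the user's cognitive burden: rather than inspecting the full model, the user reviews only the features that locally drove one decision.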