A Critical Review of Application of XAI Models in Cyber Security
Keywords: Attack, Cyber security, Explainable AI, Privacy, Security, Threat

Abstract
Incorporating Artificial Intelligence (AI) and Machine Learning (ML) into cyber security has revolutionized threat detection, anomaly identification, and incident response. However, the opaque nature of many AI models poses a significant challenge to trust and interpretability, both of which are crucial for adoption in critical security operations. Explainable AI (XAI) seeks to address these issues by providing insight into the decision-making processes of AI systems. This critical review examines the application of XAI in cyber security, highlighting its advantages, such as enhanced trust, regulatory compliance, and improved threat response. It also discusses the challenges of implementing XAI, including computational complexity and potential security risks. The review emphasizes the need for a balanced approach that leverages XAI to make AI-driven security systems more transparent and accountable while maintaining high performance. Future directions for research and development in XAI for cyber security are also explored, with the aim of ensuring robust, trustworthy, and effective cyber defence mechanisms.
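To make the idea of "insight into decision-making" concrete, the following is a minimal sketch of one model-agnostic XAI technique, permutation importance: shuffle one feature's values and measure how much the detector's accuracy drops. The toy rule-based detector, the feature names, and the generated data are all hypothetical illustrations for this sketch, not material from the reviewed literature.

```python
# Minimal sketch: permutation importance as a model-agnostic explanation
# for a toy intrusion detector. All names and data here are hypothetical.
import random

FEATURES = ["failed_logins", "bytes_out", "port"]

def detector(sample):
    """Toy rule-based 'model': flag a connection as malicious."""
    return sample["failed_logins"] > 3 or sample["bytes_out"] > 10_000

def accuracy(data, labels):
    return sum(detector(s) == y for s, y in zip(data, labels)) / len(data)

def permutation_importance(data, labels, feature, rng):
    """Drop in accuracy after shuffling one feature across samples."""
    base = accuracy(data, labels)
    shuffled = [s[feature] for s in data]
    rng.shuffle(shuffled)
    perturbed = [{**s, feature: v} for s, v in zip(data, shuffled)]
    return base - accuracy(perturbed, labels)

rng = random.Random(0)
data = [{"failed_logins": rng.randint(0, 8),
         "bytes_out": rng.randint(0, 20_000),
         "port": rng.choice([22, 80, 443])} for _ in range(200)]
labels = [detector(s) for s in data]  # labels produced by the rule itself

for f in FEATURES:
    print(f, round(permutation_importance(data, labels, f, rng), 3))
```

Because the detector never reads `port`, its importance comes out as zero, while `failed_logins` and `bytes_out` receive positive scores; this is the kind of attribution an analyst can use to judge whether an AI-driven alert rests on sensible evidence.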