Training Machine Learning Models with Generative AI for Real World Cyber Threat Emulation

Authors

  • Gaduthuri Alekhya, Undergraduate Student, Department of Computer Science and Engineering, Pragati Engineering College, Surampalem, Andhra Pradesh, India
  • Alugolu Avinash, Associate Professor, Department of Computer Science and Engineering, Pragati Engineering College, Surampalem, Andhra Pradesh, India

Keywords

AI-driven cyber, Cyber threat emulation, Cyber threat modelling, Generative AI, Machine learning in cyber security, Training ML Models

Abstract

Cyber threats are evolving rapidly, making traditional security measures less effective against advanced attacks such as zero-day exploits and adaptive malware. This research explores how generative AI, including Generative Adversarial Networks (GANs) and Large Language Models (LLMs), can simulate realistic cyber-attacks to improve Machine Learning (ML) models for threat detection and response. By generating synthetic yet lifelike cyber threats, such as AI-crafted phishing emails, deepfake-based social engineering, and polymorphic malware, organizations can train ML models to detect and prevent emerging threats more effectively. While AI-driven cyber threat emulation strengthens security defences, it also raises concerns about misuse, ethical challenges, and regulatory gaps. The study emphasizes the need for explainable AI (XAI), responsible AI governance, and human-AI collaboration to ensure that advances in cyber security are applied ethically and effectively. Looking ahead, future research should focus on building robust, interpretable AI models, improving real-time threat detection, and developing ethical frameworks to guide AI-driven cyber security. By harnessing AI’s potential for cyber threat emulation, organizations can stay ahead of evolving threats and build a more resilient digital environment.
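To make the data-augmentation idea behind the abstract concrete, the sketch below is a minimal illustration (not the authors' implementation; the feature dimensionality, network sizes, training settings, and the stand-in "real" attack samples are all assumptions) of a small PyTorch GAN that learns to generate synthetic malicious-traffic feature vectors, which could then be mixed into a detector's training set.

# Minimal sketch, assuming flow-level feature vectors; toy data stands in for real attack samples.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES = 16                                          # hypothetical number of traffic features
REAL_MALICIOUS = torch.randn(512, N_FEATURES) + 2.0      # placeholder for real labelled attack samples

# Generator maps random noise to synthetic attack feature vectors;
# discriminator scores whether a vector looks like a real attack sample.
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))
discriminator = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: distinguish real attack samples from generated ones.
    noise = torch.randn(128, 8)
    fake = generator(noise).detach()
    real = REAL_MALICIOUS[torch.randint(0, 512, (128,))]
    d_loss = bce(discriminator(real), torch.ones(128, 1)) + \
             bce(discriminator(fake), torch.zeros(128, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: produce samples the discriminator accepts as real.
    noise = torch.randn(128, 8)
    g_loss = bce(discriminator(generator(noise)), torch.ones(128, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Synthetic attack samples that could augment a detector's training data.
synthetic_malicious = generator(torch.randn(1000, 8)).detach()

In a full pipeline, these synthetic vectors would be labelled as malicious, combined with real benign and malicious traffic, and used to train a downstream classifier so that it is exposed to attack variants beyond those present in the original dataset.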

References

A. B. Ajmal, S. Khan, M. Alam, A. Mehbodniya, J. Webber, and A. Waheed, “Towards Effective Evaluation of Cyber Defense: Threat Based Adversary Emulation Approach,” IEEE Access, pp. 1–1, 2023, doi: https://doi.org/10.1109/access.2023.3272629

M. A. Ferrag, M. Debbah, and M. Al-Hawawreh, “Generative AI for Cyber Threat-Hunting in 6G-enabled IoT Networks,” Proc. 2023 IEEE/ACM 23rd Int. Symp. Cluster, Cloud and Internet Computing Workshops (CCGridW), May 2023, doi: https://doi.org/10.1109/ccgridw59191.2023.00018

Y. Yigit, W. J. Buchanan, M. G. Tehrani, and L. Maglaras, “Review of Generative AI Methods in Cyber Security,” arXiv preprint, Mar. 2024, doi: https://doi.org/10.48550/arxiv.2403.08701

S. Metta, I. Chang, J. Parker, M. P. Roman, and A. F. Ehuan, “Generative AI in Cyber Security,” arXiv preprint, May 2024. https://arxiv.org/abs/2405.01674

N. Sun et al., “Cyber threat intelligence mining for proactive cyber security defense: A survey and new perspectives,” IEEE Communications Surveys & Tutorials, vol. 25, no. 3, pp. 1–1, 2023, doi: https://doi.org/10.1109/comst.2023.3273282

H. S. Mavikumbure, V. Cobilean, C. S. Wickramasinghe, D. Drake, and M. Manic, “Generative AI in Cyber Security of Cyber Physical Systems: Benefits and Threats,” Proc. 16th Int. Conf. Human System Interaction (HSI), pp. 1–8, Jul. 2024, doi: https://doi.org/10.1109/hsi61632.2024.10613562

M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, “From ChatGPT to ThreatGPT: Impact of Generative AI in Cyber Security and Privacy,” IEEE Access, vol. 11, pp. 80218–80245, Aug. 2023, doi: https://doi.org/10.1109/ACCESS.2023.3300381

R. Pasupuleti, R. Vadapalli, and C. Mader, “Cyber Security Issues and Challenges Related to Generative AI and ChatGPT,” Proc. 10th Int. Conf. Social Networks Analysis, Management and Security (SNAMS), Nov. 2023, doi: https://doi.org/10.1109/snams60348.2023.10375472

J. Gregory and Q. Liao, “Autonomous Cyber-attack with Security-Augmented Generative Artificial Intelligence,” Proc. 2024 IEEE Int. Conf. Cyber Secur. Resilience, pp. 270–275, Sep. 2024, doi: https://doi.org/10.1109/csr61664.2024.10679470

I. Hasanov, S. Virtanen, A. Hakkala, and J. Isoaho, “Application of Large Language Models in Cyber Security: A Systematic Literature Review,” IEEE Access, vol. 12, pp. 176751–176778, 2024, doi: https://doi.org/10.1109/access.2024.3505983

Published

2025-03-11

Section

Articles