Advancing Privacy Standards and Evaluation Metrics for Robust AI Model Deployment

Authors

  • G.V. Rajeswari
  • Manas Kumar Yogi

Keywords

Attack, GDPR, HIPAA, Privacy, Security

Abstract

The transformative potential of generative AI, encompassing large language models (LLMs) and generative adversarial networks (GANs), is undeniable, revolutionizing content creation, automation, and a broad range of AI applications. However, this rapid progress is accompanied by escalating security and privacy vulnerabilities. The very capabilities that make these models so powerful also create opportunities for malicious exploitation, including data leakage, adversarial attacks, and the exposure of sensitive information. This paper examines privacy-preserving techniques for generative AI, exploring promising solutions such as differential privacy, federated learning, and homomorphic encryption. We analyze emerging security threats, including model inversion attacks aimed at reconstructing training data and prompt injection attacks designed to manipulate model behavior. Furthermore, we review the evolving body of data privacy standards and regulations, with a particular focus on the General Data Protection Regulation (GDPR), to understand the legal and ethical implications of deploying generative AI. A key contribution of this work is a set of secure deployment practices: practical guidelines for mitigating privacy risks throughout the generative AI lifecycle. This paper aims to establish a robust and comprehensive framework for evaluating and addressing privacy risks, ensuring compliance with relevant regulations, and ultimately promoting the responsible and secure use of generative AI across diverse industries. By addressing these challenges, we can harness the immense potential of generative AI while safeguarding individual privacy and fostering trust in these powerful technologies.
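
To make one of the privacy-preserving techniques named above concrete, the sketch below shows the Laplace mechanism, a basic differential-privacy primitive for releasing numeric query results. This is a minimal Python illustration only; the counting query, the sensitivity, and the epsilon value are hypothetical examples, not this paper's experimental setup.

import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially-private estimate of a numeric query.

    Adding Laplace noise with scale sensitivity/epsilon satisfies
    epsilon-differential privacy for a query whose output changes by at
    most `sensitivity` when one person's record is added or removed.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: privately release the count of matching records.
# A counting query has sensitivity 1, since one person changes the count by 1.
exact_count = 1042
private_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5)
print(f"Private count: {private_count:.1f}")

Smaller values of epsilon inject more noise and thus give a stronger privacy guarantee at the cost of utility, which is precisely the kind of trade-off an evaluation framework for privacy-preserving deployment must quantify.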

Published

2025-02-13