Cross-domain Ethical AI: A Systematic Review of Sector-specific Challenges in Healthcare, Finance, and Criminal Justice Applications

Authors

  • Akshaya Punnamaraju
  • P. Devi Sravanthi
  • Manas Kumar Yogi

Abstract

Artificial intelligence (AI) is increasingly embedded in high-stakes domains such as healthcare, criminal justice, and finance, delivering significant operational and societal benefits while simultaneously introducing complex ethical challenges. This study critically examines the ethical implications of AI deployment across these sectors using four foundational ethical principles: beneficence, non-maleficence, autonomy, and justice. In healthcare, AI-driven systems support early diagnosis, personalized treatment, and clinical decision-making; however, they raise serious concerns related to patient autonomy, informed consent, data privacy, and cybersecurity. Within the criminal justice system, predictive analytics and risk assessment tools influence policing strategies and sentencing decisions, offering efficiency gains but also amplifying risks of algorithmic bias, opacity, and the reinforcement of historical inequalities. In the financial sector, applications such as algorithmic trading, fraud detection, and automated credit scoring enhance speed and accuracy, yet may compromise transparency, accountability, and equitable access to financial services. Through a comparative, cross-domain analysis, this study identifies common ethical challenges, including bias mitigation, explainability, and accountability, while also underscoring sector-specific risks and governance requirements. The findings emphasize that responsible AI adoption necessitates comprehensive regulatory frameworks, interdisciplinary collaboration, stakeholder oversight, and continuous ethical auditing to ensure that technological innovation remains aligned with human values, social justice, and long-term societal well-being.

Published

2026-02-19

How to Cite

Punnamaraju, A., Sravanthi, P. D., & Yogi, M. K. (2026). Cross-domain Ethical AI: A Systematic Review of Sector-specific Challenges in Healthcare, Finance, and Criminal Justice Applications. Journal of Information Security System and Cyber Criminology Research, 3(1), 28–39. Retrieved from https://matjournals.net/engineering/index.php/JoISSCCR/article/view/3126