Journal of Computer Based Parallel Programming https://matjournals.net/engineering/index.php/JoCPP <p><strong>JoCPP</strong> is a peer-reviewed journal in the discipline of Computer Science published by MAT Journals Pvt. Ltd. It is a print and e-journal focused on the rapid publication of fundamental research papers in all areas of Parallel Programming. The journal covers the basic principles of writing parallel programs that can be compiled and executed.</p> en-US Thu, 04 Sep 2025 11:51:13 +0000 OJS 3.3.0.8 http://blogs.law.harvard.edu/tech/rss 60 Trust, Transparency, and Accountability in AI: The Role of Explainability https://matjournals.net/engineering/index.php/JoCPP/article/view/2438 <p><em>As Artificial Intelligence (AI) systems become more sophisticated, the demand for transparency and interpretability grows. This paper explores the emerging domain of Explainable Artificial Intelligence (XAI), highlighting its importance in enhancing trust, accountability, and decision support in AI-driven systems. It examines both symbolic and sub-symbolic models, with a particular focus on deep neural networks and their ability to aggregate and process complex data. Furthermore, this survey reviews recent research efforts in XAI across key sectors such as finance, healthcare, and autonomous systems. The study outlines the advantages of XAI, including improved user trust, reduced cognitive burden, and better management of computational overhead. It also identifies critical gaps in current research, proposing future directions that emphasize human-centered design, human-cognitive alignment, and explainability directives. 
Ultimately, the paper argues that explainability is essential to the efficacy and ethical deployment of AI-driven decision-making systems.</em></p> Surya Teja Gunipe, Shaik Fuzaila Farhatunnisa, Chandra Sekhar Koppireddy Copyright (c) 2025 Journal of Computer Based Parallel Programming https://matjournals.net/engineering/index.php/JoCPP/article/view/2438 Fri, 12 Sep 2025 00:00:00 +0000 Role of Import in Java Language and Its Application https://matjournals.net/engineering/index.php/JoCPP/article/view/2785 <p><em>This research paper focuses on the import statement in Java, which is a compile-time directive that specifies which classes, interfaces, and static members from other packages are accessible to the current source code file. It is a programmer’s convenience that improves code readability and organization by allowing the use of a class’s simple name instead of its fully qualified name. The import statement does not copy code into the program, nor does it affect runtime performance; its primary purpose is to make classes and interfaces from other packages accessible within the current Java source file without requiring their fully qualified names. The use of import in Java offers several benefits, primarily centered on code readability, conciseness, and maintainability. Import statements explicitly declare which external classes and packages are being used within a file. 
This provides a clear overview of dependencies, making it easier to understand the code’s structure and its interactions with other parts of the application or external libraries.</em></p> Padma Lochan Pradhan Copyright (c) 2025 Journal of Computer Based Parallel Programming https://matjournals.net/engineering/index.php/JoCPP/article/view/2785 Thu, 04 Dec 2025 00:00:00 +0000 A Review of Superintelligent AI for Development of Autonomous Weapons Systems https://matjournals.net/engineering/index.php/JoCPP/article/view/2816 <p><em>The swift advancement of Artificial Intelligence (AI) has sparked growing interest in the potential for superintelligent AI architectures to develop extremely effective Autonomous Weapons Systems (AWS). This review investigates how advanced approaches to sensor fusion, perception, autonomous navigation, and target-engagement decision-making operate now, or could operate under superintelligence. It examines multi-objective optimization for target selection; real-time strategic planning; adaptive learning during combat; multi-modal, sensor-fused, reliable perception; obstacle avoidance and swarm coordination; and human-machine interfaces and communications. It then reflects on the legal, ethical, and security implications, including compliance with international humanitarian law, value alignment, human dignity, adversarial and cybersecurity risks, and governance and oversight questions. Finally, it summarizes essential technical challenges and research directions for the future, including computational and hardware constraints, explainability, testing and validation, human oversight mechanisms, and emerging integration issues. The review concludes by asserting that while superintelligent AI holds remarkable promise for AWS enhancements, the resulting risk profile remains high, particularly around accountability, misuse, unintended harm, and governance. 
The responsible way forward will require a combination of technological safeguards, legal reform, and robust international treaty mechanisms.</em></p> Manas Kumar Yogi, Patnala Naga Dilpakshaya, P. Devi Sravanthi Copyright (c) 2025 Journal of Computer Based Parallel Programming https://matjournals.net/engineering/index.php/JoCPP/article/view/2816 Wed, 10 Dec 2025 00:00:00 +0000 Advancing Modern Education through Explainable Artificial Intelligence: Enhancing Learning Outcomes and Teaching with Transparent AI Systems https://matjournals.net/engineering/index.php/JoCPP/article/view/2821 <p><em>The incorporation of Explainable Artificial Intelligence (XAI) into contemporary teaching presents a transformative paradigm designed to improve learning outcomes and empower instructors while simultaneously responding to ethical concerns surrounding AI implementation. As AI-powered educational technologies, such as intelligent tutoring systems, adaptive learning systems, and automated grading software, increasingly dominate educational environments, concerns about transparency, trustworthiness, and accountability have become more pronounced. This research examines current applications of XAI in educational settings and highlights approaches that enable interpretable explanations of AI-powered decisions to enhance comprehension and trust among students, teachers, and administrators. Using a sample of 200 students, the study designed AI models to forecast student pass-fail outcomes, with attendance, assignment grades, and test grades as predictor variables. The models showed strong performance, with 80% accuracy, 85% precision, 77% recall, and an F1-score of 81%, indicating the predictive trustworthiness of the method. 
The use of inherently interpretable models, such as decision trees, was supplemented with post-hoc, model-agnostic explanation techniques, including SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations), to enhance interpretability. These tools yielded fine-grained insights into feature contributions, enabling instructors to understand the models’ predictions in depth and to tailor student interventions accordingly. Ethical considerations, including data anonymisation, bias identification, and fairness verification, were addressed rigorously, resulting in a 20% reduction in the prediction gap between demographic groups. Instructor feedback highlighted increased trust, engagement, and informed decision-making supported by interpretable AI insights. This combination of technical and human-centric innovations demonstrates how XAI enables fair, transparent, and efficient educational practice. The findings contribute to the growing scholarship on explainable AI in education.</em></p> R. Naveenkumar Copyright (c) 2025 Journal of Computer Based Parallel Programming https://matjournals.net/engineering/index.php/JoCPP/article/view/2821 Thu, 11 Dec 2025 00:00:00 +0000 Secure Data Transmission Using GEM Firewall: A High-Performance Rule Matching and Encryption Framework https://matjournals.net/engineering/index.php/JoCPP/article/view/2835 <p><em>Modern enterprise networks demand scalable, high-speed protection mechanisms capable of defending against sophisticated cyberattacks. Traditional firewalls suffer from performance limitations due to linear rule evaluation, slow packet classification, and inefficient state tracking. This paper presents a comprehensive, GEM-based secure firewall system integrated with encryption, detection, heuristic rule splitting, and steganographic key management modules to ensure secure, high-performance data transmission. 
The proposed framework demonstrates measurable improvements in packet-matching efficiency, memory footprint, throughput, and encrypted communication security. Extensive simulations confirm these gains over traditional packet-filter firewalls.</em></p> N. Balasubramanian, A. Ruba Copyright (c) 2025 Journal of Computer Based Parallel Programming https://matjournals.net/engineering/index.php/JoCPP/article/view/2835 Wed, 17 Dec 2025 00:00:00 +0000