https://matjournals.net/engineering/index.php/JoHTDCPCV/issue/feed Journal of Hacking Techniques, Digital Crime Prevention and Computer Virology 2025-12-25T18:07:50+00:00 Open Journal Systems <p><strong>JoHTDCPCV</strong> is a peer-reviewed Computer Science journal published by MAT Journals Pvt. Ltd. It is a print and e-journal focused on the rapid publication of fundamental research papers in all areas of Hacking Techniques, Digital Crime Prevention and Computer Virology. Hacking techniques covered include Phishing, Fake WAPs (Wireless Access Points), Watering Hole Attacks, Brute Forcing, Bait &amp; Switch, and Clickjacking. JoHTDCPCV also covers Computer Virology and its theoretical underpinnings, mathematical aspects, algorithmics, Computer Immunology, and Biological Models for Computers, though the journal's scope is not limited to these areas. Other topics include Reverse Engineering (Hardware and Software), Viral and Antiviral Technologies, Tools and Techniques for Cryptology and Steganography, applications in Computer Virology, Virology and IDS, Hardware Hacking, Free and Open Hardware, Operating System, Network, and Embedded Systems Security, and Social Engineering.</p> https://matjournals.net/engineering/index.php/JoHTDCPCV/article/view/2397 A Comprehensive Study for Integrating Pedersen Commitments with Zero-Knowledge Proofs in SMPC 2025-09-03T12:10:46+00:00 Koppula Lakshmi Sowjanya koppulalakshmisowjanya@gmail.com Manas Kumar Yogi manas.yogi@gmail.com <p><em>This study explores a hybrid Secure Multi-Party Computation (SMPC) protocol combining Pedersen commitments and Zero-Knowledge Proofs (ZKPs) to ensure input integrity and computational correctness while preserving privacy. We propose a comprehensive framework for secure collaborative computations, addressing critical challenges in scalability, security, and efficiency. 
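As a toy illustration of the Pedersen commitment primitive named in this abstract (not the authors' protocol), the following sketch shows the commit operation and the additive homomorphism that makes such commitments useful in SMPC; the modulus and generators are illustrative placeholders, not production parameters:

```python
# Toy Pedersen commitment: C = g^m * h^r mod p.
# Parameters are illustrative only; real deployments use large,
# standardized groups and generators with unknown discrete-log relation.
import secrets

p = 2**127 - 1   # illustrative Mersenne prime modulus (assumption)
q = p - 1        # range for random blinding factors
g = 5            # generator (assumption)
h = 7            # second generator, log_g(h) unknown to parties (assumption)

def commit(m: int, r: int) -> int:
    """Commit to message m with blinding r: hiding (r random) and binding."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Additive homomorphism: Com(m1, r1) * Com(m2, r2) = Com(m1 + m2, r1 + r2),
# which lets parties combine committed inputs without opening them.
r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
c1, c2 = commit(3, r1), commit(4, r2)
assert (c1 * c2) % p == commit(3 + 4, r1 + r2)
```

A ZKP layer, as the abstract describes, would then prove statements about the committed values (e.g., range membership) without revealing them.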
The proposed methodology is rigorously evaluated through theoretical analysis, highlighting its advantages and potential applications in cutting-edge domains such as blockchain technology and privacy-preserving data analytics. This research aims to provide a deep, nuanced perspective on how these advanced cryptographic primitives can be integrated to enhance the robustness, practical applicability, and trustworthiness of SMPC in real-world scenarios, particularly in sensitive domains such as finance, healthcare, and decentralized systems, where data confidentiality and integrity are paramount. </em></p> 2025-09-04T00:00:00+00:00 Copyright (c) 2025 Journal of Hacking Techniques, Digital Crime Prevention and Computer Virology https://matjournals.net/engineering/index.php/JoHTDCPCV/article/view/2500 Federated Learning for Privacy-Preserving Health Monitoring in Multi-Hospital Smart Healthcare Systems 2025-09-30T09:45:53+00:00 Akshitha Korlepara chandrasekhar.koppireddy@gmail.com Satya Hanvitha Goli chandrasekhar.koppireddy@gmail.com Amrutha Nela chandrasekhar.koppireddy@gmail.com Chandra Sekhar Koppireddy chandrasekhar.koppireddy@gmail.com <p><em>With the rapid advancement of digital technology, hospitals are becoming “smart” hospitals, using connected devices and advanced analytics to enhance patient care. At the same time, hospitals face significant privacy, security, and compliance obligations when collecting and sharing sensitive, personally identifiable health information, owing to strict regulations such as HIPAA and GDPR. Federated learning is a promising approach to these challenges: it enables hospitals to collaborate on building powerful artificial intelligence models without ever sharing patient data. Instead, each hospital keeps its data on-site and shares only model updates. 
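The "share model updates, not data" idea can be sketched with a minimal FedAvg-style aggregation step. This is an illustrative sketch, not the system presented in the paper; the hospital count, parameter vectors, and sample sizes below are invented for the example:

```python
# Minimal FedAvg-style aggregation: each hospital trains locally and
# shares only its model parameters; a coordinator averages them,
# weighted by local dataset size. No raw patient records leave a site.
from typing import List

def fed_avg(local_weights: List[List[float]], n_samples: List[int]) -> List[float]:
    """Weighted average of per-site parameter vectors."""
    total = sum(n_samples)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, n_samples)) / total
        for i in range(dim)
    ]

# Three hypothetical hospitals with different data volumes.
updates = [[0.2, 1.0], [0.4, 0.0], [0.6, 2.0]]
counts = [100, 300, 600]
global_w = fed_avg(updates, counts)   # -> [0.5, 1.3]
```

In practice, the per-site updates would themselves be protected further (e.g., by secure aggregation or differential privacy), as the abstract's mention of additional training-time protections suggests.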
In this study, we present a system that uses federated learning to enable privacy-focused health monitoring across multiple hospitals. We include techniques to further protect patient data during training and test the system’s performance using simulations that mimic real hospital settings. We also discuss practical challenges and legal considerations for implementing this technology. Our findings show that federated learning can help hospitals learn from one another and improve patient care while ensuring privacy and regulatory compliance. </em></p> 2025-09-30T00:00:00+00:00 Copyright (c) 2025 Journal of Hacking Techniques, Digital Crime Prevention and Computer Virology https://matjournals.net/engineering/index.php/JoHTDCPCV/article/view/2886 Artificial Intelligence and Behavioural Analytics: Transforming Threat Detection and Cybercrime Analysis 2025-12-23T09:50:02+00:00 Ritesh Upadhyay upadhyayritesh003@gmail.com Barkha Mehta upadhyayritesh003@gmail.com Jigesh Mehta upadhyayritesh003@gmail.com Krunal Mehta upadhyayritesh003@gmail.com Vishal Shah upadhyayritesh003@gmail.com <p>The rapid expansion of digital systems has led to increasingly complex cyber threats that traditional security methods can no longer manage effectively. As a result, artificial intelligence, particularly machine learning and behavioral analysis, has become essential for advancing threat detection and strengthening cybersecurity defenses. This study explores how AI-driven systems are transforming the identification of unusual patterns, the prediction of malicious actions, and the ability to respond to security incidents proactively. AI techniques that establish baselines of normal user and system behavior, identify recurring patterns, and continuously learn from new data have proven especially valuable for detecting sophisticated and evolving cyberattacks. By adapting over time, these models can uncover hidden anomalies that older, rule-based systems often miss. 
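The baseline-then-deviation idea described above can be illustrated with a deliberately simple statistical sketch (not the study's models, which learn continuously): fit a baseline for a behavioral metric, then flag observations that deviate beyond a threshold. The metric, data, and threshold k are invented for the example:

```python
# Illustrative behavioral-baseline anomaly check: learn mean/std of a
# per-user metric (e.g., logins per hour), then flag observations that
# deviate by more than k standard deviations from the baseline.
import statistics

def fit_baseline(history):
    """Return (mean, std) of the observed normal behavior."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, mean, std, k=3.0):
    """True if the observation lies outside mean +/- k*std."""
    return abs(value - mean) > k * std

mean, std = fit_baseline([4, 5, 6, 5, 4, 6, 5])
assert not is_anomalous(6, mean, std)   # within normal range
assert is_anomalous(40, mean, std)      # far outside the baseline
```

ML-based systems replace this fixed threshold with learned, adaptive models, which is what lets them catch the evolving attack patterns rule-based systems miss.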
The research also integrates insights from both cybersecurity and criminology to better understand criminal behavior online. This includes analyzing the motivations, strategies, and habits of cyber offenders, which in turn helps refine AI models for more accurate threat detection. Despite this progress, significant challenges remain. Many AI models lack transparency, making it difficult for security teams to understand how decisions are made. Ethical concerns and privacy risks arise from the large-scale collection and analysis of user data, and adversaries continue to develop methods to deceive or manipulate AI systems. These issues highlight the need for further research, clearer regulatory frameworks, and stronger collaboration between humans and AI to ensure responsible and effective use. Overall, the study demonstrates that AI-enabled behavioral analysis has the potential to greatly enhance digital security, strengthen threat intelligence, and advance the broader field of cybercrime research. By addressing current limitations and promoting responsible deployment, AI can play a pivotal role in protecting modern digital environments.</p> 2025-12-23T00:00:00+00:00 Copyright (c) 2025 Journal of Hacking Techniques, Digital Crime Prevention and Computer Virology https://matjournals.net/engineering/index.php/JoHTDCPCV/article/view/2912 Cybersecurity Threat Modeling for Machine Learning Systems: An Asset-centric Approach with Trust Boundaries and Ownership Roles 2025-12-25T09:08:53+00:00 Sunil Vijaya Kumar Gaddam dr.sunilvkg.cse@rgmcet.edu.in Samunnisa samunnisacseds@rgmcet.edu.in P. Sreedevi pogulasreedevi.cse@rgmcet.edu.in <p>Cybersecurity systems increasingly integrate machine learning (ML) models, yet threat modeling practices lag in addressing ML-specific vulnerabilities and operational complexities. 
This study proposes a comprehensive, standardized framework for documenting cybersecurity assets with essential fields reflecting trust boundaries and ownership responsibilities. The framework facilitates rigorous threat identification, supports cloud adoption, and enhances accountability through dual roles of owners and custodians. An implementation on an ML-powered intrusion detection prototype demonstrated a 35% reduction in threat identification time and a 33% improvement in security coverage compared to baseline documentation. Our findings indicate practicality and scalability for both academic research and industry applications, advancing the state-of-the-art in ML cybersecurity governance.</p> 2025-12-25T00:00:00+00:00 Copyright (c) 2025 Journal of Hacking Techniques, Digital Crime Prevention and Computer Virology https://matjournals.net/engineering/index.php/JoHTDCPCV/article/view/2915 A Dual-Model AI System for Linux Malware Detection Using Static ELF Analysis and Network Flow Behavior 2025-12-25T18:07:50+00:00 Nirmal L. R. mu8190@srmist.edu.i Manobharathi U. R. mu8190@srmist.edu.i Lakshmi Narayana S. mu8190@srmist.edu.in S. Lakshmanaprakash lakshmanaprakash.s@ist.srmtrichy.edu.in <p><em>This study introduces a two-part deep learning setup designed to catch malware in Linux systems and IoT network flows. One model examines file structures before execution, while the other watches live network activity to spot suspicious patterns. Because they work together—using insights from code layout along with traffic timing—they respond faster to unknown threats. After fixing problems like overlapping training data and uneven sample counts, both parts now handle new examples more reliably. Results show stronger accuracy across different devices, making it practical for real-world use. 
We built two custom models. The first, a bidirectional LSTM with an attention mechanism for network-traffic classification, achieved 88% precision on roughly 13 million flows from the IoT-23 dataset, trained for 19 epochs with group-based sampling and strict temporal splits. The second, a hybrid of convolutional and LSTM layers, also with attention, performs static binary analysis and reached 66% accuracy across 1,815 Linux ELF files after full retraining over 45 epochs. Training runs across multiple GPUs with mixed precision, uses focal loss tuning, and groups data by type, all aimed at handling extreme label skew (nearly 89% malicious, just under 11% benign samples). The network model's roughly 88% accuracy held up after rigorous retraining with strict temporal splits to prevent data leakage, with similar F1 and precision figures. The static model's 66% accuracy is consistent with its narrow sample pool: only 1,815 binary-labeled cases limit variety. Across tests, results matched these baselines closely. Both models were trained on an NVIDIA RTX 3060 Ti with careful memory management, and further gains appear possible with more ELF samples or longer hyperparameter tuning. Overall, the dual-model system is effective at spotting malware on Linux and IoT devices. Rather than relying on a single method, it combines both, leaning on the network side, which benefits from the large IoT-23 set of about 13 million entries; the static side, at 66%, is limited by data but still useful in practice. We also corrected data leakage and label-skew problems introduced early in development, replacing initially inflated figures with honest ones. That cleanup sets a tighter bar for how such security models should be trained. </em></p> 2025-12-25T00:00:00+00:00 Copyright (c) 2025 Journal of Hacking Techniques, Digital Crime Prevention and Computer Virology
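The focal loss tuning this last abstract credits for handling its roughly 89%/11% label skew can be sketched as follows. This is an illustrative binary focal loss in the style of Lin et al., not the paper's implementation; the alpha and gamma values are common defaults, not the authors' reported settings:

```python
# Illustrative binary focal loss: down-weights easy, confidently correct
# examples so training focuses on the hard (often minority-class) ones,
# which is why it helps under extreme label skew.
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """p: predicted probability of the positive class; y: true label (0/1)."""
    pt = p if y == 1 else 1.0 - p            # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha     # per-class weighting
    return -a * (1.0 - pt) ** gamma * math.log(pt)

# Easy, correct predictions contribute far less loss than hard ones:
easy = focal_loss(0.95, 1)   # confident and correct
hard = focal_loss(0.30, 1)   # positive example scored low
assert easy < hard
```

With gamma = 0 and alpha = 0.5 the expression reduces to (half of) standard cross-entropy; raising gamma increasingly suppresses the loss from well-classified majority-class samples.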