International Journal of Neural Systems and Applications
https://matjournals.net/engineering/index.php/IJNSA
AI-Driven Gesture Recognition using Wi-Fi Signals for Enhancing Women’s Safety
https://matjournals.net/engineering/index.php/IJNSA/article/view/2974
<p><em>Ensuring women’s safety in both private and public environments remains a critical societal concern, particularly in situations where traditional safety mechanisms such as mobile applications, panic buttons, or wearable devices are inaccessible or impractical. This paper presents an AI-driven gesture recognition framework that leverages Wi-Fi Channel State Information (CSI) to detect distress gestures in a non-intrusive, contactless, and privacy-preserving manner. Unlike camera-based surveillance systems, the proposed approach does not capture visual data, thereby avoiding privacy violations and maintaining functionality in low-light or occluded environments. The proposed system exploits the fact that human motion alters the amplitude and phase of Wi-Fi signals propagating through indoor environments. These subtle variations are extracted as CSI features and analyzed using a hybrid deep learning architecture combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The CNN component captures spatial patterns from CSI spectrograms, while the LSTM models temporal dependencies associated with dynamic gesture movements. This combination enables effective discrimination between distress-related gestures and routine human activities. A custom dataset comprising 2,400 gesture samples was collected in controlled indoor settings under varying distances, obstacles, and non-line-of-sight conditions. Six gesture classes were defined, including distress, panic, cautionary, and neutral gestures. Experimental evaluation demonstrates that the proposed CNN–LSTM model achieves a classification accuracy of 94.2%, outperforming standalone CNN and LSTM models in terms of precision, recall, and F1-score. Robustness analysis further indicates minimal performance degradation in multipath-rich and dynamically changing environments. 
To enable real-world applicability, the gesture recognition framework is integrated with an IoT-based alert generation module that automatically transmits emergency notifications and location information to predefined contacts upon detecting a distress gesture. The end-to-end system response time remains below 1.5 seconds, making it suitable for real-time deployment. Overall, the study demonstrates the feasibility of Wi-Fi-based AI sensing as a cost-effective, scalable, and privacy-aware solution for women’s safety. The results highlight the potential of leveraging existing wireless infrastructure to provide continuous, unobtrusive protection without relying on user-initiated actions or visual monitoring systems.</em></p>
Kartiki Sanjay Repale, Gauri Sanjay Chaure, Sujit More, Harshada M. Raghuwanshi
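The CNN-over-spectrogram plus LSTM-over-time pairing described in the abstract can be sketched as a toy forward pass. This is a minimal NumPy illustration with random, untrained weights; the input dimensions (30 subcarriers by 64 time frames) are assumptions for demonstration, and only the six-class output follows the paper. It is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Naive 2-D valid cross-correlation of spectrogram x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations are stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    n = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2 * n]), sig(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g          # update cell state
    h = o * np.tanh(c)         # emit hidden state
    return h, c

def classify(spec, n_classes=6, hidden=8):
    """CNN front-end over frequency x time, LSTM over time, softmax head."""
    k = rng.standard_normal((3, 3))
    feat = np.maximum(conv2d_valid(spec, k), 0.0)   # ReLU feature map
    seq = feat.mean(axis=0, keepdims=True).T        # (T, 1) per-step feature

    W = rng.standard_normal((4 * hidden, 1)) * 0.1
    U = rng.standard_normal((4 * hidden, hidden)) * 0.1
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x_t in seq:                                 # temporal modelling
        h, c = lstm_step(x_t, h, c, W, U, b)

    logits = rng.standard_normal((n_classes, hidden)) @ h
    p = np.exp(logits - logits.max())               # stable softmax
    return p / p.sum()

probs = classify(rng.standard_normal((30, 64)))     # 30 subcarriers x 64 frames
```

In a trained system the kernel, LSTM, and head weights would be learned jointly from labelled CSI spectrograms; the sketch only fixes the data flow and shapes.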
Copyright (c) 2026 International Journal of Neural Systems and Applications
2026-01-13 | Vol. 2, No. 1, pp. 1–11
Deep Learning of Electronic Health Records: Opportunities and Challenges
https://matjournals.net/engineering/index.php/IJNSA/article/view/3165
<p><em>Electronic health records (EHRs) have transformed the healthcare sector by replacing most paper-based recording of patient information, including diagnoses, medications, laboratory results, and clinical notes. The growth of EHR data, together with advances in deep learning architectures, has created unprecedented opportunities to turn this information into actionable insights for disease prediction, diagnosis, and individualized treatment recommendation. This paper presents a broad survey of deep learning applications in EHR analysis, a field that offers transformative opportunities in early-stage disease diagnosis, clinical decision support, and automated medical documentation. At the same time, critical obstacles are addressed, including data heterogeneity, privacy, limited interpretability, and the complexity of integration into clinical workflows. The architectures considered are state-of-the-art, including recurrent neural networks (RNNs), long short-term memory (LSTM) networks, convolutional neural networks (CNNs), transformer-based models, and graph neural networks (GNNs) applied to structured and sequential EHR data. Recent developments in privacy-preserving federated learning, explainable artificial intelligence (XAI) for clinical interpretability, and multi-modal learning approaches combining imaging, genomics, and clinical text are also highlighted. The paper provides healthcare organizations, researchers, and practitioners with a comprehensive framework for understanding and implementing deep learning solutions in clinical practice, and reflects on ethical, regulatory, and practical considerations.</em></p>
P. Guna Naimisha, P. Surya Sri, Chandra Sekhar Koppireddy
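As an illustration of how structured, sequential EHR data is typically prepared for the recurrent and transformer models surveyed here, the sketch below embeds per-visit diagnosis codes and stacks visits into a time-ordered sequence. The mini-vocabulary, embedding size, and mean-pooling choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mini-vocabulary of diagnosis codes; real EHR vocabularies
# (e.g. ICD-10) contain tens of thousands of entries.
vocab = {"E11.9": 0, "I10": 1, "J45.909": 2, "N18.3": 3}
embed = rng.standard_normal((len(vocab), 4))   # one 4-d vector per code

def visit_vector(codes):
    """Embed one visit as the mean of its code embeddings (a 'bag of codes')."""
    idx = [vocab[c] for c in codes]
    return embed[idx].mean(axis=0)

def patient_sequence(visits):
    """Stack visit vectors in chronological order -> (n_visits, embed_dim),
    the shape a sequence model over EHR data would consume."""
    return np.stack([visit_vector(v) for v in visits])

# Three chronological visits for one hypothetical patient.
seq = patient_sequence([["E11.9", "I10"], ["I10"], ["J45.909", "N18.3", "I10"]])
```

In practice the embedding table is learned end to end with the downstream model, and richer inputs (medications, lab values, timestamps) are concatenated per visit; the sketch fixes only the sequence-of-visits representation.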
Copyright (c) 2026 International Journal of Neural Systems and Applications
2026-03-02 | Vol. 2, No. 1, pp. 12–23
Computationally Efficient Deep Learning for Real-time Drone Detection in Images: A Review of Models and Deployment Challenges
https://matjournals.net/engineering/index.php/IJNSA/article/view/3479
<p><strong><em>Background: </em></strong><em>The rapid expansion of unmanned aerial vehicles for surveillance, agriculture, and logistics demands robust object detection despite Sim2Real disparities. Models trained in simulation suffer up to 40.5% mAP degradation in real environments due to domain shift, lighting variation, and scale imbalance. Conventional pipelines with multi-sensor latencies exceeding 1000 ms result in a 52.5% increase in false negatives for tiny objects smaller than 32 pixels, as demonstrated on VisDrone. </em></p> <p><strong><em>Purpose: </em></strong><em>This study analyzes the Sim2Real gap in drone detection across 50 architectures, examining performance distributions, stress factors, scale sensitivity, and latency behavior to establish a resilience taxonomy. </em></p> <p><strong><em>Methods: </em></strong><em>Fifty models, including YOLO variants and Faster R-CNN, were evaluated on 50,000 VisDrone-augmented images across four environments. Statistical analyses included regression (r = -0.949), t-tests (p < 0.001), ANOVA (p = 0.044), heatmaps, and Monte Carlo latency simulations (n = 2000). Mitigation strategies involved feature pyramid fusion, attention modules, and pipeline parallelization with bootstrap validation. </em></p> <p><strong><em>Findings: </em></strong><em>A 0.405 mAP drop was observed in Sim2Real, driven mainly by weather and domain shift. Tiny objects lost 52.5% accuracy. Latency ranged from 178 to 1097 ms, dominated by communication and synchronization. Parallelization improved return on investment by 45%. </em></p> <p><strong><em>Novelty and Conclusion: </em></strong><em>A unified four-part taxonomy links domain shift, stressors, scale, and latency, reducing the performance gap by 62%. Multi-scale adaptive ensembles restored 35% fidelity. Edge-hybrid systems under 250 ms and attention-enhanced FPNs are recommended for resilient UAV deployment.</em></p>
Belay Sitotaw Goshu
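The Monte Carlo latency analysis mentioned in Methods can be approximated in a few lines: sample per-stage latencies and compare a serial pipeline against one that overlaps communication with synchronization. The stage means and spreads below are invented for illustration; only the draw count (n = 2000) and the dominance of communication and synchronization follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 2000  # Monte Carlo draws, matching the n = 2000 in the abstract

def sample(mean, sd):
    """Draw N latencies (ms) from a truncated normal; parameters are assumed."""
    return np.maximum(rng.normal(mean, sd, N), 0.0)

capture   = sample(30, 5)
comm      = sample(400, 80)   # communication dominates, per the abstract
sync      = sample(250, 60)   # synchronization overhead
inference = sample(60, 15)

# Serial pipeline: every stage waits for the previous one.
serial = capture + comm + sync + inference

# Parallelized pipeline: communication and synchronization overlap, so the
# slower of the two sets the critical path for that segment.
parallel = capture + np.maximum(comm, sync) + inference

print(f"serial mean   ~ {serial.mean():.0f} ms")
print(f"parallel mean ~ {parallel.mean():.0f} ms")
```

Swapping in measured per-stage distributions would turn this sketch into the kind of simulation the review describes; the structure (sample, compose, compare critical paths) is the same.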
Copyright (c) 2026 International Journal of Neural Systems and Applications
2026-04-24 | Vol. 2, No. 1, pp. 24–52
Explainable Neural Systems: Advancing Transparency in Deep Learning Models
https://matjournals.net/engineering/index.php/IJNSA/article/view/3534
<p><em>This work investigates the growing challenges associated with the opacity of deep learning models. Despite their remarkable success across domains such as healthcare diagnostics, financial forecasting, and autonomous systems, these models remain inherently difficult to interpret, which limits transparency, trust, and accountability. To address these concerns, the study advances Explainable Neural Systems (ENS) as an emerging paradigm that balances high predictive performance with meaningful interpretability. A structured, integrative framework is proposed that combines post-hoc explanation techniques, such as feature attribution methods and saliency mapping, with inherently interpretable neural architectures, including attention-based models and modular network designs. The framework also embeds human-centered evaluation strategies that prioritize usability, cognitive alignment, and contextual relevance. The work emphasizes that technical explainability alone is insufficient unless it translates into practical interpretability for end-users, and therefore advocates iterative evaluation processes that incorporate user feedback so that generated explanations are both accurate and intuitively comprehensible. Experimental findings, derived from multiple benchmark datasets and application scenarios, demonstrate that the proposed ENS framework substantially improves interpretability metrics without significant degradation in predictive accuracy, reinforcing the feasibility of achieving transparency alongside performance. The results further highlight the critical importance of explainability in high-stakes environments, where opaque decision-making can have profound ethical and societal implications.
Overall, this study contributes a robust and extensible foundation for advancing explainable artificial intelligence and outlines key directions for future research, including standardized evaluation protocols, the incorporation of causal inference mechanisms, and adaptive explanation systems capable of responding dynamically to varying user needs and contexts.</em></p>
Md. Ali
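A minimal example of the post-hoc feature attribution this abstract pairs with interpretable architectures is occlusion-based attribution: perturb one input feature at a time and record the change in the model's output. The toy linear model below is an assumption for demonstration; the technique itself is model-agnostic, which is what makes it post hoc.

```python
import numpy as np

# Toy linear "model"; occlusion attribution treats any predictor as a
# black box, so the same procedure applies to a deep network.
w = np.array([0.1, 2.0, 0.5])

def model(x):
    return float(w @ x)

def occlusion_attribution(x, baseline=0.0):
    """Score each feature by the output drop when it is replaced with a
    baseline value -- a simple post-hoc feature attribution method."""
    base_pred = model(x)
    scores = np.empty_like(x)
    for j in range(x.size):
        x_occ = x.copy()
        x_occ[j] = baseline      # occlude feature j
        scores[j] = base_pred - model(x_occ)
    return scores

x = np.ones(3)
scores = occlusion_attribution(x)   # equals w * x for a linear model
```

For images the same idea slides an occluding patch over pixel regions to produce a saliency map; the choice of baseline value is itself an interpretability design decision.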
Copyright (c) 2026 International Journal of Neural Systems and Applications
2026-05-11 | Vol. 2, No. 1, pp. 53–66