AI-Driven Gesture Recognition Framework for Enhancing Human-Computer Interaction
Keywords:
Artificial intelligence, CNN, Embedded system, Gesture recognition, Human-Computer Interaction (HCI), Image processing, OpenCV, Touchless control

Abstract
Gesture recognition has become a core aspect of contemporary Human-Computer Interaction (HCI), enabling intuitive and natural communication between humans and machines. With the rapid growth of Artificial Intelligence (AI), particularly machine learning and deep learning, gesture recognition systems have progressed far beyond simple rule-based frameworks to intelligent models capable of real-time interpretation of complex gestures. AI integration allows systems to learn from large datasets, adapt to diverse user behaviors, and continually improve recognition performance. This paper examines the role of AI in gesture recognition, analyzing the technologies, architectures, and algorithms that power these systems. Both vision-based and sensor-based strategies are discussed, with emphasis on preprocessing, feature extraction, and classification methods. Experimental findings indicate that AI-driven models achieve high accuracy across varied gesture datasets. Real-world applications in healthcare, robotics, gaming, and smart home environments further highlight their significance. Despite notable advancements, challenges such as lighting variability, occlusions, and high computational demand persist. This study provides an in-depth overview of AI-powered gesture recognition and outlines future research directions aimed at enhancing scalability, reliability, and accessibility. The results suggest that AI-enabled gesture recognition is reshaping HCI, supporting seamless and contact-free interaction across multiple industries.
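The vision-based pipeline the abstract refers to (preprocessing, feature extraction, classification) can be sketched in outline. The following is a hypothetical minimal example, not the paper's implementation: the frame is synthetic, the resize and grid-pooling steps are deliberately naive stand-ins for OpenCV preprocessing, and the linear classifier stands in for a trained CNN head.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Convert an RGB frame to a normalized grayscale patch."""
    gray = frame.mean(axis=2)                      # naive grayscale conversion
    h, w = gray.shape
    ys = np.linspace(0, h - 1, size).astype(int)   # nearest-neighbor resize
    xs = np.linspace(0, w - 1, size).astype(int)
    patch = gray[np.ix_(ys, xs)]
    return patch / 255.0                           # scale intensities to [0, 1]

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Toy feature extractor: mean intensity over a coarse 8x8 grid of cells."""
    cells = patch.reshape(8, 8, 8, 8)              # 8x8 grid of 8x8-pixel cells
    return cells.mean(axis=(1, 3)).ravel()         # 64-dimensional feature vector

def classify(features: np.ndarray, weights: np.ndarray) -> int:
    """Linear classifier standing in for a trained CNN classification head."""
    scores = weights @ features
    return int(np.argmax(scores))

# Hypothetical data: a random 120x160 RGB frame and 5 gesture classes.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160, 3)).astype(np.float64)
weights = rng.standard_normal((5, 64))
label = classify(extract_features(preprocess(frame)), weights)
```

In a real system, the resize and smoothing would typically be done with OpenCV, the handcrafted grid features would be replaced by learned convolutional features, and the weights would come from training on a labeled gesture dataset.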