Modular Deep Learning Framework for Assistive Perception: Gaze, Affect, and Speaker Identification
By: Akshit Pramod Anchan, Jewelith Thomas, Sritama Roy
Potential Business Impact:
Helps computers see and hear to understand you.
Developing comprehensive assistive technologies requires the seamless integration of visual and auditory perception. This research evaluates the feasibility of a modular architecture inspired by the core functionalities of perceptive systems such as 'Smart Eye.' We propose and benchmark three independent sensing modules: a Convolutional Neural Network (CNN) for eye state detection (drowsiness/attention), a deep CNN for facial expression recognition, and a Long Short-Term Memory (LSTM) network for voice-based speaker identification. Using the Eyes Image dataset, FER2013, and a customized audio dataset, our models achieved accuracies of 93.0%, 97.8%, and 96.89%, respectively. This study demonstrates that lightweight, domain-specific models can achieve high fidelity on discrete tasks, establishing a validated foundation for future real-time, multimodal integration in resource-constrained assistive devices.
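The paper does not include code here; the sketch below is a minimal Keras illustration of how three independent modules of this kind could be defined, assuming grayscale eye crops, 48x48 FER2013 face images, and MFCC sequences as audio input. All layer counts, input shapes, and class counts are illustrative assumptions rather than the authors' exact configurations.

```python
# Illustrative sketch only: layer sizes, input shapes, and class counts are
# assumptions, not the configurations reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_eye_state_cnn(input_shape=(24, 24, 1)):
    """Small CNN for binary eye state (open/closed) classification."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])


def build_expression_cnn(input_shape=(48, 48, 1), num_classes=7):
    """Deeper CNN for the seven FER2013 expression classes."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])


def build_speaker_lstm(num_speakers=10, n_mfcc=40, max_frames=200):
    """LSTM over MFCC frames for closed-set speaker identification."""
    return models.Sequential([
        layers.Input(shape=(max_frames, n_mfcc)),
        layers.LSTM(128, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_speakers, activation="softmax"),
    ])


if __name__ == "__main__":
    # Each module is compiled and inspected independently, mirroring the
    # paper's modular, task-specific design prior to any multimodal fusion.
    for model, loss in [
        (build_eye_state_cnn(), "binary_crossentropy"),
        (build_expression_cnn(), "categorical_crossentropy"),
        (build_speaker_lstm(), "categorical_crossentropy"),
    ]:
        model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
        model.summary()
```

Keeping each network as a standalone compiled model reflects the study's central point: the three perception tasks can be trained, benchmarked, and deployed independently before any real-time multimodal integration is attempted.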
Similar Papers
Multimodal Behavioral Patterns Analysis with Eye-Tracking and LLM-Based Reasoning
Human-Computer Interaction
Helps computers understand how people look at things.
VocalEyes: Enhancing Environmental Perception for the Visually Impaired through Vision-Language Models and Distance-Aware Object Detection
Human-Computer Interaction
Helps blind people "see" by describing surroundings.
CS3D: An Efficient Facial Expression Recognition via Event Vision
CV and Pattern Recognition
Helps robots understand your face better, using less power.