Modular Deep Learning Framework for Assistive Perception: Gaze, Affect, and Speaker Identification

Published: November 25, 2025 | arXiv ID: 2511.20474v1

By: Akshit Pramod Anchan, Jewelith Thomas, Sritama Roy

Potential Business Impact:

Enables assistive devices to monitor visual attention, read facial expressions, and recognize who is speaking.

Business Areas:
Image Recognition, Data and Analytics, Software

Developing comprehensive assistive technologies requires the seamless integration of visual and auditory perception. This research evaluates the feasibility of a modular architecture inspired by core functionalities of perceptive systems like 'Smart Eye.' We propose and benchmark three independent sensing modules: a Convolutional Neural Network (CNN) for eye state detection (drowsiness/attention), a deep CNN for facial expression recognition, and a Long Short-Term Memory (LSTM) network for voice-based speaker identification. Utilizing the Eyes Image, FER2013, and customized audio datasets, our models achieved accuracies of 93.0%, 97.8%, and 96.89%, respectively. This study demonstrates that lightweight, domain-specific models can achieve high fidelity on discrete tasks, establishing a validated foundation for future real-time, multimodal integration in resource-constrained assistive devices.
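The abstract does not specify the layer configurations, input resolutions, or training setup of the three modules, so the following is only a minimal sketch of what a "lightweight, domain-specific" eye state CNN might look like in Keras. The 64x64 grayscale input, layer sizes, and binary open/closed label are all assumptions for illustration, not the authors' architecture.

    # Hypothetical sketch of a lightweight eye-state CNN (open vs. closed).
    # Assumes 64x64 grayscale eye crops; the paper's actual design may differ.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_eye_state_cnn(input_shape=(64, 64, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),   # eyelid/iris patterns
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dropout(0.5),                        # regularization for small datasets
            layers.Dense(1, activation="sigmoid"),      # probability the eye is open
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

The expression-recognition CNN and the LSTM speaker-identification module would follow the same pattern, swapping the input (48x48 FER2013 faces, or audio feature sequences) and the output layer (softmax over classes) accordingly.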

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition