Learning Spatio-Temporal Feature Representations for Video-Based Gaze Estimation
By: Alexandre Personnic, Mihai Bâce
Video-based gaze estimation methods aim to capture the inherently temporal dynamics of human eye gaze from multiple image frames. However, since models must capture both spatial and temporal relationships, performance is limited not only by the feature representations within a frame but also by those across frames. We propose the Spatio-Temporal Gaze Network (ST-Gaze), a model that combines a CNN backbone with dedicated channel attention and self-attention modules to optimally fuse eye and face features. The fused features are then treated as a spatial sequence, allowing the model to capture intra-frame context, which is then propagated through time to model inter-frame dynamics. We evaluate our method on the EVE dataset and show that ST-Gaze achieves state-of-the-art performance both with and without person-specific adaptation. Additionally, our ablation study provides further insights into model performance, showing that preserving and modelling intra-frame spatial context with our spatio-temporal recurrence is fundamentally superior to premature spatial pooling. As such, our results pave the way towards more robust video-based gaze estimation using commonly available cameras.
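To make the described pipeline concrete, the sketch below shows one plausible way to wire up a CNN backbone, channel attention, self-attention fusion of eye and face features, and a recurrence over spatial tokens. It is a minimal illustration based only on the abstract, not the authors' implementation: the backbone choice (ResNet-18), the SE-style channel attention, the single multi-head attention layer, the shared GRU, the feature dimension, and all class and parameter names are assumptions.

```python
# Hypothetical sketch of an ST-Gaze-style architecture (not the authors' code).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed design)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # global average pool -> (B, C)
        return x * w[:, :, None, None]           # re-weight channels


class STGazeSketch(nn.Module):
    def __init__(self, feat_dim: int = 512, heads: int = 8):
        super().__init__()
        # Separate CNN backbones for the eye and face crops.
        self.eye_cnn = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.face_cnn = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.eye_att = ChannelAttention(feat_dim)
        self.face_att = ChannelAttention(feat_dim)
        # Self-attention fuses eye and face tokens within a frame.
        self.fusion = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        # A GRU shared across spatial positions carries intra-frame context
        # through time instead of pooling it away before the recurrence.
        self.recurrence = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 2)       # pitch / yaw gaze angles

    def forward(self, eyes, faces):
        # eyes, faces: (B, T, 3, H, W) clips of eye and face crops.
        B, T = eyes.shape[:2]
        outputs, hidden = [], None
        for t in range(T):
            e = self.eye_att(self.eye_cnn(eyes[:, t]))     # (B, C, h, w)
            f = self.face_att(self.face_cnn(faces[:, t]))  # (B, C, h, w)
            # Flatten spatial maps into token sequences and concatenate.
            tokens = torch.cat([e.flatten(2), f.flatten(2)], dim=2).transpose(1, 2)
            fused, _ = self.fusion(tokens, tokens, tokens)  # (B, N, C)
            # Fold the token axis into the batch axis so each spatial token
            # keeps its own hidden state across frames.
            seq = fused.reshape(B * fused.shape[1], 1, -1)
            out, hidden = self.recurrence(seq, hidden)
            out = out.reshape(B, fused.shape[1], -1)
            outputs.append(self.head(out.mean(dim=1)))      # pool only after recurrence
        return torch.stack(outputs, dim=1)       # (B, T, 2) gaze per frame
```

Note the design choice this sketch illustrates: spatial tokens are averaged only after the recurrent update, so the temporal model sees the full intra-frame spatial context rather than a prematurely pooled vector, which is the contrast the ablation study in the abstract refers to.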
Similar Papers
Eyes on Target: Gaze-Aware Object Detection in Egocentric Video
Computer Vision and Pattern Recognition
Helps computers see what people are looking at.
StreamGaze: Gaze-Guided Temporal Reasoning and Proactive Understanding in Streaming Videos
Computer Vision and Pattern Recognition
Teaches computers to understand what you're looking at.
STARE: Predicting Decision Making Based on Spatio-Temporal Eye Movements
Neural and Evolutionary Computing
Predicts what you'll buy by watching your eyes.