Gaze-VLM: Bridging Gaze and VLMs through Attention Regularization for Egocentric Understanding
By: Anupam Pani, Yanchao Yang
Potential Business Impact:
Makes computers understand what you're looking at.
Eye gaze offers valuable cues about attention, short-term intent, and future actions, making it a powerful signal for modeling egocentric behavior. In this work, we propose a gaze-regularized framework that enhances VLMs for two key egocentric understanding tasks: fine-grained future event prediction and current activity understanding. Unlike prior approaches that rely solely on visual inputs or use gaze as an auxiliary input signal, our method uses gaze only during training. We introduce a gaze-regularized attention mechanism that aligns model focus with human visual gaze. This design is flexible and modular, allowing it to generalize across multiple VLM architectures that utilize attention. Experimental results show that our approach improves semantic prediction scores by up to 11 for future event prediction and around 7 for current activity understanding, compared to the corresponding baseline models trained without gaze regularization. These results highlight the value of gaze-guided training in improving the accuracy and robustness of egocentric VLMs. Overall, this work establishes a foundation for using human gaze to enhance the predictive capabilities of VLMs in real-world scenarios like assistive robots and human-machine collaboration. Code and additional information are available at: https://github.com/anupampani/Gaze-VLM
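For intuition, a minimal sketch of what a gaze-regularized attention term could look like in PyTorch is shown below. The abstract only states that model attention is aligned with human gaze during training; the specific KL-divergence form, the way attention maps are pooled from the VLM, and the weighting hyperparameter lambda_gaze here are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a gaze-regularization term for VLM training (illustrative only;
# the paper's actual loss and attention-extraction details may differ).
import torch


def gaze_regularization_loss(attn_map: torch.Tensor,
                             gaze_heatmap: torch.Tensor,
                             eps: float = 1e-8) -> torch.Tensor:
    """KL divergence between the model's visual attention and a gaze heatmap.

    attn_map:     (B, N) attention weights over N image patches/tokens.
    gaze_heatmap: (B, N) gaze fixation density over the same patch grid.
    """
    # Normalize both maps into probability distributions over patches.
    attn = attn_map / (attn_map.sum(dim=-1, keepdim=True) + eps)
    gaze = gaze_heatmap / (gaze_heatmap.sum(dim=-1, keepdim=True) + eps)
    # KL(gaze || attn): penalize attention mass that misses gazed regions.
    return (gaze * (torch.log(gaze + eps) - torch.log(attn + eps))).sum(dim=-1).mean()


def training_loss(lm_loss: torch.Tensor,
                  attn_map: torch.Tensor,
                  gaze_heatmap: torch.Tensor,
                  lambda_gaze: float = 0.1) -> torch.Tensor:
    # The regularizer is added to the usual task loss during training only
    # (lambda_gaze is a hypothetical weighting hyperparameter); at inference
    # time no gaze input is needed, so the model runs on visual input alone.
    return lm_loss + lambda_gaze * gaze_regularization_loss(attn_map, gaze_heatmap)
```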
Similar Papers
GazeVLM: A Vision-Language Model for Multi-Task Gaze Understanding
CV and Pattern Recognition
Helps computers understand where people are looking.
Eye Gaze Tells You Where to Compute: Gaze-Driven Efficient VLMs
CV and Pattern Recognition
Makes smart glasses understand things faster.
From Gaze to Insight: Bridging Human Visual Attention and Vision Language Model Explanation for Weakly-Supervised Medical Image Segmentation
CV and Pattern Recognition
Helps doctors find sickness in scans faster.