Multimodal Behavioral Patterns Analysis with Eye-Tracking and LLM-Based Reasoning
By: Dongyang Guo, Yasmeen Abdrabou, Enkeleda Thaqi, and more
Potential Business Impact:
Helps computers understand how people look at things.
Eye-tracking data reveals valuable insights into users' cognitive states but is difficult to analyze due to its structured, non-linguistic nature. While large language models (LLMs) excel at reasoning over text, they struggle with temporal and numerical data. This paper presents a multimodal human-AI collaborative framework designed to enhance cognitive pattern extraction from eye-tracking signals. The framework includes: (1) a multi-stage pipeline using horizontal and vertical segmentation alongside LLM reasoning to uncover latent gaze patterns; (2) an Expert-Model Co-Scoring Module that integrates expert judgment with LLM output to generate trust scores for behavioral interpretations; and (3) a hybrid anomaly detection module combining LSTM-based temporal modeling with LLM-driven semantic analysis. Our results across several LLMs and prompt strategies show improvements in consistency, interpretability, and performance, with up to 50% accuracy in difficulty prediction tasks. This approach offers a scalable, interpretable solution for cognitive modeling and has broad potential in adaptive learning, human-computer interaction, and educational analytics.
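The Expert-Model Co-Scoring Module lends itself to a compact illustration. The sketch below is a hypothetical Python reading of that idea, assuming a simple weighted blend of a human expert's agreement rating and the LLM's self-reported confidence; the paper does not spell out the scoring formula, so the `Interpretation` fields, the weights, and the acceptance threshold here are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class Interpretation:
    """A behavioral interpretation of a gaze segment, scored by both sources."""
    description: str        # LLM-generated description of the gaze pattern
    llm_confidence: float   # model's self-reported confidence in [0, 1]
    expert_rating: float    # human expert agreement rating in [0, 1]


def trust_score(item: Interpretation, expert_weight: float = 0.6) -> float:
    """Blend expert judgment with LLM confidence into a single trust score.

    The linear weighting is a hypothetical choice; the paper only states that
    expert judgment and LLM output are integrated into a trust score.
    """
    llm_weight = 1.0 - expert_weight
    return expert_weight * item.expert_rating + llm_weight * item.llm_confidence


# Example: keep only interpretations whose trust score clears a threshold.
candidates = [
    Interpretation("repeated regressions suggest re-reading", 0.82, 0.90),
    Interpretation("long fixations indicate high cognitive load", 0.55, 0.40),
]
trusted = [c for c in candidates if trust_score(c) >= 0.7]
for c in trusted:
    print(f"{c.description}: trust={trust_score(c):.2f}")
```

Weighting the expert side more heavily reflects the framework's human-in-the-loop emphasis, but any calibration of the two signals would serve the same purpose of filtering out low-confidence behavioral interpretations.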
Similar Papers
GazeLLM: Multimodal LLMs incorporating Human Visual Attention
Human-Computer Interaction
Lets computers understand videos by watching eyes.
Towards Attention-Aware Large Language Models: Integrating Real-Time Eye-Tracking and EEG for Adaptive AI Responses
Human-Computer Interaction
Helps computers know when you're not paying attention.
To See or To Read: User Behavior Reasoning in Multimodal LLMs
Artificial Intelligence
Pictures help computers guess what you'll buy next.