GPU-accelerated surface-based gaze mapping for XR experiences
By: Charles Javerliat, Guillaume Lavoué
Extended reality (XR) is a fast-growing domain in which there is an increasing need to analyze and understand user behavior. In particular, understanding human visual attention during immersive experiences is crucial for many applications. Visual attention is commonly visualized and analyzed by building fixation density maps from eye-tracking data. Such attention mapping is well mastered for 3-degrees-of-freedom (3DoF) experiences (i.e., involving 360° images or videos), but much less so for 6DoF data, where the user can move freely in 3D space. In that case, the visual attention information has to be mapped onto the 3D objects themselves. Some solutions exist for constructing such surface-based 6DoF attention maps; however, they have several drawbacks: long processing times, strong dependence on mesh resolution and/or texture mapping, and impractical data representations for further processing. In this context, we propose a novel GPU-based algorithm that resolves these issues: our attention maps are generated in interactive time and rendered in real time. Experiments on a challenging scene demonstrate the accuracy and robustness of our approach. To stimulate research in this area, the source code is publicly released and integrated into PLUME for ease of use in XR experiments.
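To make the general idea of surface-based attention mapping concrete, below is a minimal CUDA sketch that splats a Gaussian kernel around a gaze-ray/mesh hit point into a per-vertex density map. This is an illustrative baseline, not the paper's method: it uses Euclidean rather than geodesic distance and a per-vertex representation (precisely the kind of mesh-resolution dependence the paper aims to avoid), and all names (Vec3, splatGazeSample, sigma) are hypothetical.

```cuda
// Minimal sketch (NOT the authors' implementation): accumulate a per-vertex
// fixation density map by splatting a Gaussian kernel around each gaze hit.
#include <cuda_runtime.h>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

__device__ float sqDist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// One thread per vertex: add the Gaussian-weighted contribution of a single
// gaze sample (the gaze-ray/mesh intersection point) to that vertex.
__global__ void splatGazeSample(const Vec3* vertices, int numVertices,
                                Vec3 hitPoint, float sigma, float* attention) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numVertices) return;
    float d2 = sqDist(vertices[v], hitPoint);
    attention[v] += expf(-d2 / (2.0f * sigma * sigma));
}

int main() {
    // Toy geometry: three vertices on a line; the gaze hits near the middle one.
    std::vector<Vec3> verts = {{0, 0, 0}, {1, 0, 0}, {2, 0, 0}};
    int n = (int)verts.size();

    Vec3* dVerts; float* dAtt;
    cudaMalloc(&dVerts, n * sizeof(Vec3));
    cudaMalloc(&dAtt, n * sizeof(float));
    cudaMemcpy(dVerts, verts.data(), n * sizeof(Vec3), cudaMemcpyHostToDevice);
    cudaMemset(dAtt, 0, n * sizeof(float));

    Vec3 hit = {1.05f, 0.0f, 0.0f};  // hypothetical gaze-ray intersection
    float sigma = 0.2f;              // kernel bandwidth, in scene units
    splatGazeSample<<<(n + 255) / 256, 256>>>(dVerts, n, hit, sigma, dAtt);

    std::vector<float> att(n);
    cudaMemcpy(att.data(), dAtt, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("vertex %d: %.4f\n", i, att[i]);

    cudaFree(dVerts);
    cudaFree(dAtt);
    return 0;
}
```

In practice the kernel would run once per eye-tracking sample over the whole recording, which is why a GPU formulation matters; a resolution-independent surface representation, as targeted by the paper, would replace the per-vertex buffer used here.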