GazeBlend: Exploring Paired Gaze-Based Input Techniques for Navigation and Selection Tasks on Mobile Devices
By: Omar Namnakani, Yasmeen Abdrabou, Jonathan Grizou, and more
The potential of gaze for hands-free mobile interaction is increasingly evident. While each gaze input technique presents distinct advantages and limitations, combining techniques can amplify strengths and mitigate challenges. We report the results of a user study (N=24) in which we compared the usability and performance of pairings of three popular gaze input techniques (Dwell Time, Pursuits, and Gaze Gestures) for navigation and selection tasks while sitting and walking. Results show that pairing Gaze Gestures for navigation with either Dwell Time or Pursuits for selection improves task completion time and completion rate compared to using either technique individually. We discuss the implications of pairing gaze input techniques: how Pursuits may negatively impact other techniques, likely due to the visual clutter it adds; how using gestures for navigation reduces the chance of unintentional selections; and how motor activity affects performance. Our findings provide insights for designing effective gaze-enabled interfaces.
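The three techniques differ mechanically: Dwell Time selects a target by holding gaze on it for a set duration, Pursuits selects by following a moving on-screen target with the eyes, and Gaze Gestures trigger actions through deliberate eye strokes. As a rough illustration of the pairing the study found effective (gestures for navigation, dwell for selection), here is a minimal, hypothetical Python sketch; the class name, thresholds, and method signature are assumptions for illustration, not the authors' implementation.

```python
import math
import time

# Illustrative thresholds (assumed, not from the paper):
DWELL_THRESHOLD_S = 0.8     # fixation duration required to select
GESTURE_MIN_DIST = 120.0    # stroke length (px) that counts as a gesture

class PairedGazeInput:
    """Toy pairing of gaze gestures (navigation) with dwell time (selection)."""

    def __init__(self, items):
        self.items = items          # selectable item labels
        self.focused = 0            # index of the currently focused item
        self.dwell_start = None     # when the current fixation began
        self.stroke_origin = None   # where a potential gesture stroke began

    def on_gaze_sample(self, x, y, on_item, t=None):
        """Feed one gaze sample; returns a selected item label or None."""
        t = time.monotonic() if t is None else t

        # Gaze gestures: a long, mostly horizontal stroke moves the focus.
        if self.stroke_origin is None:
            self.stroke_origin = (x, y)
        dx = x - self.stroke_origin[0]
        dy = y - self.stroke_origin[1]
        if math.hypot(dx, dy) >= GESTURE_MIN_DIST:
            if abs(dx) > abs(dy):
                step = 1 if dx > 0 else -1
                self.focused = (self.focused + step) % len(self.items)
            self.stroke_origin = (x, y)  # reset for the next stroke
            self.dwell_start = None      # a gesture cancels any pending dwell
            return None

        # Dwell time: holding gaze on the focused item selects it.
        if on_item:
            if self.dwell_start is None:
                self.dwell_start = t
            elif t - self.dwell_start >= DWELL_THRESHOLD_S:
                self.dwell_start = None
                return self.items[self.focused]
        else:
            self.dwell_start = None
        return None

# Example: a rightward stroke moves focus, then a held fixation selects.
ui = PairedGazeInput(["Inbox", "Compose", "Settings"])
ui.on_gaze_sample(100, 300, on_item=False, t=0.0)
ui.on_gaze_sample(260, 305, on_item=False, t=0.1)        # stroke -> focus moves
ui.on_gaze_sample(260, 305, on_item=True, t=0.2)          # fixation begins
print(ui.on_gaze_sample(262, 306, on_item=True, t=1.2))   # "Compose"
```

A real system would additionally filter gaze velocity to distinguish saccades from fixations and drift; this sketch keeps only the minimum structure needed to show why the pairing avoids the unintentional selections that dwell-only input suffers from.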
Similar Papers
Crossing-Based Interactions Using a Gaze-Tracking System
Human-Computer Interaction
Lets you control computers faster with your eyes.
Gaze-Hand Steering for Travel and Multitasking in Virtual Environments
Human-Computer Interaction
Lets you control virtual worlds with eyes and hands.
Exploring the Feasibility of Gaze-Based Navigation Across Path Types
Human-Computer Interaction
Lets you control virtual worlds by looking.