Investigating the Effect of Encumbrance on Gaze- and Touch-based Target Acquisition on Handheld Mobile Devices
By: Omar Namnakani, Yasmeen Abdrabou, John H. Williamson, and more
The potential of gaze as an input modality on mobile devices is growing. However, users often operate their devices while encumbered, e.g., carrying objects while walking, and the impact of encumbrance on gaze input performance remains unexplored. To investigate this, we conducted a user study (N=24) evaluating the effect of encumbrance on the performance of 1) Gaze using dwell time (with/without visual feedback), 2) GazeTouch (with/without visual feedback), and 3) one- or two-handed Touch input. While Touch generally performed better, Gaze, especially with visual feedback, delivered consistent performance whether participants were encumbered or not. Participants' preferences also varied with encumbrance: they preferred Gaze when encumbered and Touch when unencumbered. Our findings deepen the understanding of how encumbrance affects gaze input and inform the selection of appropriate input modalities in future mobile user interfaces to account for situational impairments.
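The paper does not include code, but as a rough illustration of the two gaze techniques named above, dwell-time selection and gaze-plus-touch confirmation, a minimal sketch is shown below. The dwell duration and confirmation window are assumed values for illustration only, not the parameters used in the study.

```python
# Illustrative sketch (not the paper's implementation) of two gaze selection
# strategies: dwell-time selection and gaze + touch confirmation.
# Thresholds and data shapes are assumptions.

from dataclasses import dataclass
from typing import Iterable, Optional, Tuple


@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # gaze x in screen pixels
    y: float   # gaze y in screen pixels


def inside(sample: GazeSample, target: Tuple[float, float, float, float]) -> bool:
    """Return True if the gaze sample falls inside a rectangular target (x, y, w, h)."""
    tx, ty, tw, th = target
    return tx <= sample.x <= tx + tw and ty <= sample.y <= ty + th


def dwell_select(samples: Iterable[GazeSample],
                 target: Tuple[float, float, float, float],
                 dwell_s: float = 0.6) -> Optional[float]:
    """Select the target once gaze stays on it continuously for dwell_s seconds.

    Returns the selection timestamp, or None if the dwell never completes.
    """
    enter_t: Optional[float] = None
    for s in samples:
        if inside(s, target):
            if enter_t is None:
                enter_t = s.t                  # dwell starts
            elif s.t - enter_t >= dwell_s:
                return s.t                     # dwell completed: select
        else:
            enter_t = None                     # gaze left the target: reset
    return None


def gaze_touch_select(samples: Iterable[GazeSample],
                      touch_times: Iterable[float],
                      target: Tuple[float, float, float, float],
                      window_s: float = 0.15) -> Optional[float]:
    """Select the target if a touch tap occurs while (or shortly after) gaze is on it."""
    on_target = [(s.t, inside(s, target)) for s in samples]
    for tap in touch_times:
        # Most recent gaze sample at or shortly before the tap, within window_s.
        recent = [hit for t, hit in on_target if 0.0 <= tap - t <= window_s]
        if recent and recent[-1]:
            return tap
    return None
```

In this sketch, visual feedback (e.g., a shrinking dwell indicator on the target) would be driven by the elapsed time since `enter_t`; the study compared both techniques with and without such feedback.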
Similar Papers
GazeBlend: Exploring Paired Gaze-Based Input Techniques for Navigation and Selection Tasks on Mobile Devices
Human-Computer Interaction
Lets you control phones with your eyes better.
Does Embodiment Matter to Biomechanics and Function? A Comparative Analysis of Head-Mounted and Hand-Held Assistive Devices for Individuals with Blindness and Low Vision
Human-Computer Interaction
Helps blind people use technology better.
Crossing-Based Interaction Using a Gaze-Tracking System (original French title: Interactions par franchissement grâce à un système de suivi du regard)
Human-Computer Interaction
Lets you control computers faster with your eyes.