Predicting Multitasking in Manual and Automated Driving with Optimal Supervisory Control
By: Jussi Jokinen, Patrick Ebel, Tuomo Kujala
Potential Business Impact:
Helps cars detect when drivers become distracted.
Modern driving involves interactive technologies that can divert attention, increasing the risk of accidents. This paper presents a computational cognitive model that simulates human multitasking while driving. Based on optimal supervisory control theory, the model predicts how multitasking adapts to variations in driving demands, interactive tasks, and automation levels. Unlike previous models, it accounts for context-dependent multitasking across different degrees of driving automation. The model predicts longer in-car glances on straight roads and shorter glances during curves. It also predicts longer glance durations with driver aids such as lane-centering assistance, and how these aids interact with environmental demands. Validated against two empirical datasets, the model offers insights into driver multitasking amid evolving in-car technologies and automation.
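The abstract's qualitative predictions can be illustrated with a toy uncertainty-threshold sketch (this is not the paper's model; the function, parameter names, and values below are hypothetical): a driver can look in-car until predicted lane-position uncertainty, which grows faster on curves and slower with lane-centering assistance, reaches a safety threshold.

```python
def max_glance_duration(drift_rate, threshold, assist_gain=0.0):
    """Time the driver can look away before lane-position uncertainty,
    growing linearly at drift_rate and damped by assistance, hits threshold.
    All parameters are illustrative, not taken from the paper."""
    effective_rate = drift_rate * (1.0 - assist_gain)
    if effective_rate <= 0:
        return float("inf")  # full automation: no forced glance back
    return threshold / effective_rate

straight = max_glance_duration(drift_rate=0.05, threshold=0.3)       # low demand
curve = max_glance_duration(drift_rate=0.15, threshold=0.3)          # high demand
assisted_curve = max_glance_duration(0.15, 0.3, assist_gain=0.5)     # lane-centering
```

Under these assumed parameters the sketch reproduces the paper's qualitative pattern: longer in-car glances on straight roads than in curves, and longer glances again when lane-centering assistance slows uncertainty growth.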
Similar Papers
Research on Driving Scenario Technology Based on Multimodal Large Language Model Optimization
CV and Pattern Recognition
Helps self-driving cars see and react better.
Driver Assistant: Persuading Drivers to Adjust Secondary Tasks Using Large Language Models
Human-Computer Interaction
Helps drivers stay focused while cars drive themselves.
Research on a Driver's Perceived Risk Prediction Model Considering Traffic Scene Interaction
Human-Computer Interaction
Makes self-driving cars safer by predicting danger.