Context-aware, Ante-hoc Explanations of Driving Behaviour
By: Dominik Grundt, Ishan Saxena, Malte Petersen, and more
Potential Business Impact:
Helps self-driving cars explain their driving moves.
Autonomous vehicles (AVs) must be both safe and trustworthy to gain social acceptance and become a viable option for everyday public transportation. Explanations of system behaviour can increase safety and trust in AVs. Unfortunately, explaining the behaviour of AI-based driving functions is particularly challenging, as their decision-making processes are often opaque. The field of Explainability Engineering tackles this challenge by developing explanation models at design time. These models are derived from system design artefacts and stakeholder needs in order to produce correct and good explanations. To support this field, we propose an approach that enables context-aware, ante-hoc explanations of (un)expectable driving manoeuvres at runtime. The visual yet formal language of Traffic Sequence Charts is used to formalise explanation contexts as well as the corresponding (un)expectable driving manoeuvres. A dedicated runtime monitor enables context recognition and the ante-hoc presentation of explanations at runtime. In combination, these elements aim to bridge correct and good explanations. Our method is demonstrated in a simulated overtaking scenario.
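To make the overall architecture more concrete, the following is a minimal, hypothetical sketch of how a runtime monitor might match formalised explanation contexts against the current traffic scene and present an explanation ante hoc, i.e. before the manoeuvre is executed. It is not the paper's implementation: all class, field, and threshold names are illustrative assumptions, and a plain Python predicate stands in for the Traffic Sequence Charts formalism.

```python
# Hypothetical sketch: a runtime monitor that matches formalised explanation
# contexts against the current scene and emits an ante-hoc explanation for the
# (un)expectable manoeuvre. Names and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Scene:
    """Simplified snapshot of the traffic situation (assumed fields)."""
    ego_speed: float          # m/s
    lead_vehicle_gap: float   # m, distance to the vehicle ahead
    oncoming_lane_free: bool  # whether the opposite lane is clear


@dataclass
class ExplanationContext:
    """A formalised context paired with the explanation of the expected manoeuvre.

    In the paper, contexts and manoeuvres are specified with Traffic Sequence
    Charts; here a plain predicate over the scene stands in for that formalism.
    """
    name: str
    predicate: Callable[[Scene], bool]
    expected_manoeuvre: str
    explanation: str


class RuntimeMonitor:
    """Checks which formalised context the current scene satisfies and
    presents the corresponding explanation before the manoeuvre starts."""

    def __init__(self, contexts: List[ExplanationContext]):
        self.contexts = contexts

    def step(self, scene: Scene) -> Optional[str]:
        for ctx in self.contexts:
            if ctx.predicate(scene):
                # Ante-hoc: the explanation is issued ahead of the manoeuvre.
                return f"[{ctx.name}] {ctx.expected_manoeuvre}: {ctx.explanation}"
        return None


# Example: an overtaking context, loosely inspired by the simulated scenario.
overtake_ctx = ExplanationContext(
    name="slow lead vehicle",
    predicate=lambda s: s.lead_vehicle_gap < 30.0 and s.oncoming_lane_free,
    expected_manoeuvre="overtake",
    explanation="The vehicle ahead is close and the oncoming lane is free, "
                "so an overtaking manoeuvre is expectable.",
)

monitor = RuntimeMonitor([overtake_ctx])
message = monitor.step(Scene(ego_speed=20.0, lead_vehicle_gap=25.0, oncoming_lane_free=True))
if message:
    print(message)
```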
Similar Papers
Explaining Autonomous Vehicles with Intention-aware Policy Graphs
Artificial Intelligence
Explains why self-driving cars make choices.
Temporal Counterfactual Explanations of Behaviour Tree Decisions
Robotics
Helps robots explain why they do things.
Towards Safer and Understandable Driver Intention Prediction
CV and Pattern Recognition
Helps self-driving cars understand driver intentions.