Benchmarking Contextual Understanding for In-Car Conversational Systems
By: Philipp Habicht, Lev Sorokin, Abdullah Saydemir, and more
Potential Business Impact:
Tests car voice assistants for better answers.
In-Car Conversational Question Answering (ConvQA) systems significantly enhance user experience by enabling seamless voice interactions. However, assessing their accuracy and reliability remains a challenge. This paper explores the use of Large Language Models (LLMs) alongside advanced prompting techniques and agent-based methods to evaluate how well ConvQA system responses adhere to user utterances. The focus is on contextual understanding and the ability to provide accurate venue recommendations that respect user constraints and situational context. To evaluate utterance-response coherence with an LLM, we synthetically generate user utterances paired with both correct system responses and modified, failure-containing ones. We apply input-output, chain-of-thought, self-consistency, and multi-agent prompting techniques to 13 reasoning and non-reasoning LLMs of varying sizes and providers, including OpenAI, DeepSeek, Mistral AI, and Meta. We evaluate our approach on a case study involving restaurant recommendations. The most substantial improvements occur for small non-reasoning models when advanced prompting techniques are applied, particularly multi-agent prompting. However, reasoning models consistently outperform non-reasoning models, with the best performance achieved using single-agent prompting with self-consistency. Notably, DeepSeek-R1 reaches an F1-score of 0.99 at a cost of 0.002 USD per request. Overall, the best balance between effectiveness and cost-time efficiency is reached with the non-reasoning model DeepSeek-V3. Our findings show that LLM-based evaluation offers a scalable and accurate alternative to traditional human evaluation for benchmarking contextual understanding in ConvQA systems.
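The self-consistency prompting mentioned in the abstract can be sketched as a majority vote over repeated judge calls. The sketch below is illustrative only: `judge` and `toy_judge` are hypothetical stand-ins for an LLM call that labels whether a system response adheres to the user utterance; the paper's actual prompts and models are not reproduced here.

```python
from collections import Counter

def self_consistency_vote(judge, utterance, response, n_samples=5):
    """Query the judge n_samples times and return the majority verdict.

    `judge` is a stand-in for an LLM call (with nonzero temperature)
    that returns a label such as "pass" or "fail" for the pair.
    """
    verdicts = [judge(utterance, response) for _ in range(n_samples)]
    label, count = Counter(verdicts).most_common(1)[0]
    return label, count / n_samples  # majority label plus agreement ratio

# Hypothetical deterministic judge, for demonstration only.
def toy_judge(utterance, response):
    return "pass" if "vegan" in response else "fail"

verdict, agreement = self_consistency_vote(
    toy_judge,
    "Find me a vegan restaurant nearby.",
    "Green Garden offers a fully vegan menu two blocks away.",
)
```

In practice the agreement ratio doubles as a rough confidence signal: low agreement across samples flags utterance-response pairs that may need human review.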
Similar Papers
Automated Factual Benchmarking for In-Car Conversational Systems using Large Language Models
Computation and Language
Makes cars' talking computers tell the truth.
Incorporating Contextual Paralinguistic Understanding in Large Speech-Language Models
Computation and Language
Teaches computers to understand feelings in voices.
Adaptive Multi-Agent Response Refinement in Conversational Systems
Computation and Language
Makes chatbots smarter by checking facts and adapting to you.