Automated Factual Benchmarking for In-Car Conversational Systems using Large Language Models
By: Rafael Giebisch, Ken E. Friedl, Lev Sorokin and more
Potential Business Impact:
Makes cars' talking computers tell the truth.
In-car conversational systems promise to improve the in-vehicle user experience. Modern conversational systems are based on Large Language Models (LLMs), which makes them prone to errors such as hallucinations, i.e., inaccurate, fictitious, and therefore factually incorrect information. In this paper, we present an LLM-based methodology for the automatic factual benchmarking of in-car conversational systems. We instantiate our methodology with five LLM-based methods, leveraging ensembling techniques and diverse personae to enhance agreement and minimize hallucinations. We use our methodology to evaluate CarExpert, an in-car retrieval-augmented conversational question-answering system, with respect to its factual correctness against a vehicle's manual. We produced a novel dataset specifically created for the in-car domain and tested our methodology against an expert evaluation. Our results show that the combination of GPT-4 with Input-Output Prompting achieves over 90% factual correctness agreement with expert evaluations, while also being the most efficient approach, yielding an average response time of 4.5 s. Our findings suggest that LLM-based testing constitutes a viable approach for validating conversational systems with respect to their factual correctness.
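To illustrate the general idea of LLM-based factual benchmarking, here is a minimal sketch of an Input-Output prompting judge: the LLM receives a manual excerpt, a user question, and the system's answer, and returns a verdict on factual correctness. It assumes the OpenAI Python client and a GPT-4 model; the prompt wording, the judge_factual_correctness function, and the sample inputs are illustrative assumptions, not the paper's actual prompts or dataset.

```python
# Sketch of an Input-Output prompting judge for factual correctness,
# assuming the OpenAI Python client (v1+) and an OPENAI_API_KEY in the
# environment. Prompt text and examples are hypothetical.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are evaluating an in-car conversational system.
Given an excerpt from the vehicle's manual, a user question, and the
system's answer, decide whether the answer is factually correct with
respect to the manual. Reply with exactly one word: CORRECT or INCORRECT.

Manual excerpt:
{manual}

Question:
{question}

System answer:
{answer}
"""

def judge_factual_correctness(manual: str, question: str, answer: str) -> str:
    """Return the LLM's verdict ('CORRECT' or 'INCORRECT')."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep judgments as deterministic as possible
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                manual=manual, question=question, answer=answer
            ),
        }],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    verdict = judge_factual_correctness(
        manual="Tire pressure for the front wheels should be 2.4 bar.",
        question="What pressure should the front tires have?",
        answer="The front tires should be inflated to 2.4 bar.",
    )
    print(verdict)  # expected: CORRECT
```

In practice, such a judge would be run over a benchmark dataset and its verdicts compared against expert labels to compute an agreement rate; the paper's ensembling and persona-based methods build on this basic judging step.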
Similar Papers
Benchmarking Contextual Understanding for In-Car Conversational Systems
Computation and Language
Tests car voice assistants for better answers.
Multi-Modal Fact-Verification Framework for Reducing Hallucinations in Large Language Models
Artificial Intelligence
Fixes AI lies to make it more truthful.
GOFAI meets Generative AI: Development of Expert Systems by means of Large Language Models
Artificial Intelligence
Makes AI more truthful and trustworthy.