Faithfulness metric fusion: Improving the evaluation of LLM trustworthiness across domains
By: Ben Malin, Tatiana Kalganova, Nikolaos Boulgouris
Potential Business Impact:
Makes it easier to verify that AI answers are truthful and trustworthy.
We present a methodology for improving the accuracy of faithfulness evaluation in Large Language Models (LLMs). The methodology combines elementary faithfulness metrics into a single fused metric, with the aim of more accurately evaluating the faithfulness of LLM outputs. The proposed fusion strategy deploys a tree-based model to identify the importance of each elementary metric, guided by human judgements of the faithfulness of LLM responses. The fused metric is demonstrated to correlate more strongly with human judgements than the elementary metrics across all tested domains. Improving the ability to evaluate the faithfulness of LLMs allows greater confidence to be placed in these models, enabling their use in a wider range of scenarios. Additionally, we homogenise a collection of datasets spanning question-answering and dialogue domains, and augment them with human judgements and LLM responses, allowing faithfulness evaluation to be reproduced and trialled across domains.
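To make the fusion idea concrete, here is a minimal sketch, assuming scikit-learn's GradientBoostingRegressor as the tree-based model and synthetic stand-ins for the elementary metric scores and human judgements; the paper's actual choice of model, elementary metrics, and data is not specified here, so treat every name below as illustrative rather than the authors' implementation.

```python
# Illustrative sketch only: fuse elementary faithfulness metrics with a
# tree-based model trained against human judgements. The model, metrics,
# and data are assumptions, not the paper's exact setup.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical elementary metric scores for 500 responses; the columns
# could be, e.g., an NLI entailment score, a QA-based score, and a
# token-overlap score.
X = rng.random((500, 3))

# Synthetic stand-in for human faithfulness judgements (in practice these
# come from annotators rating each LLM response).
human = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, human, random_state=0)

# Tree-based fusion model: learns how much each elementary metric matters
# for predicting the human judgement.
fusion = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

fused_scores = fusion.predict(X_te)  # the fused faithfulness metric
print("metric importances:", fusion.feature_importances_)
print("Spearman vs. human:", spearmanr(fused_scores, y_te)[0])
```

A tree-based learner is a convenient choice for this role because its feature importances give a direct reading of how much each elementary metric contributes to the fused score, which matches the paper's stated goal of identifying each metric's importance.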
Similar Papers
Walk the Talk? Measuring the Faithfulness of Large Language Model Explanations
Computation and Language
Checks whether an AI's answers are honest.
Towards Transparent Reasoning: What Drives Faithfulness in Large Language Models?
Computation and Language
Makes AI give honest reasons for its answers.
Can LLMs Faithfully Explain Themselves in Low-Resource Languages? A Case Study on Emotion Detection in Persian
Computation and Language
Makes AI explain its thoughts more honestly.