Knowledge-Graph Based RAG System Evaluation Framework
By: Sicheng Dong, Vahid Zolfaghari, Nenad Petrovic, and more
Potential Business Impact:
Checks how well AI writes by examining its reasoning.
Large language models (LLMs) have become a significant research focus and are utilized in various fields, such as text generation and dialog systems. One of the most essential applications of LLMs is Retrieval Augmented Generation (RAG), which greatly enhances the reliability and relevance of generated content. However, evaluating RAG systems remains a challenging task. Traditional evaluation metrics struggle to capture the key features of modern LLM-generated content, which often exhibits high fluency and naturalness. Inspired by RAGAS, a well-known RAG evaluation framework, we extend it into a knowledge-graph (KG) based evaluation paradigm, enabling multi-hop reasoning and semantic community clustering to derive more comprehensive scoring metrics. By incorporating these comprehensive evaluation criteria, we gain a deeper understanding of RAG systems and a more nuanced perspective on their performance. To validate the effectiveness of our approach, we compare its performance with RAGAS scores and construct a human-annotated subset to assess the correlation between human judgments and automated metrics. In addition, we conduct targeted experiments to demonstrate that our KG-based evaluation method is more sensitive to subtle semantic differences in generated outputs. Finally, we discuss the key challenges in evaluating RAG systems and highlight potential directions for future research.
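The abstract does not describe the scoring mechanics in detail, so the following is only a minimal sketch of what a KG-based metric with multi-hop reasoning and community clustering could look like, not the paper's actual implementation. It assumes triples have already been extracted from the retrieved context and the generated answer (a real pipeline would use an LLM- or parser-based extractor), uses networkx for graph construction and greedy modularity community detection, and the helper names `build_graph` and `community_coverage` are hypothetical.

```python
# Illustrative sketch only: KG construction, semantic community clustering,
# and a multi-hop coverage score for a generated answer against its context.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


def build_graph(triples):
    """Build a directed knowledge graph from (subject, relation, object) triples."""
    g = nx.DiGraph()
    for subj, rel, obj in triples:
        g.add_edge(subj, obj, relation=rel)
    return g


def community_coverage(context_triples, answer_triples, max_hops=2):
    """Fraction of semantic communities in the context graph that the answer
    reaches within `max_hops` hops from any entity it mentions."""
    context_graph = build_graph(context_triples)
    answer_entities = {e for s, _, o in answer_triples for e in (s, o)}

    undirected = context_graph.to_undirected()
    communities = list(greedy_modularity_communities(undirected))
    if not communities:
        return 0.0

    covered = 0
    for community in communities:
        reached = False
        for entity in answer_entities & set(undirected.nodes):
            # Multi-hop check: is any community member within max_hops of this entity?
            lengths = nx.single_source_shortest_path_length(
                undirected, entity, cutoff=max_hops
            )
            if set(lengths) & set(community):
                reached = True
                break
        if reached:
            covered += 1
    return covered / len(communities)


# Toy usage with hand-written triples; in practice these would come from an
# automated triple extractor over the retrieved documents and the answer.
context = [
    ("RAG", "retrieves_from", "vector store"),
    ("RAG", "feeds", "LLM"),
    ("LLM", "generates", "answer"),
    ("answer", "cites", "source document"),
]
answer = [("RAG", "feeds", "LLM"), ("LLM", "generates", "answer")]
print(f"community coverage: {community_coverage(context, answer):.2f}")
```

A coverage-style score like this rewards answers that touch every semantic cluster of the supporting evidence rather than rephrasing one cluster fluently, which is one plausible way a graph-based metric could be more sensitive to subtle semantic differences than surface-level similarity measures.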
Similar Papers
Retrieval Augmented Generation Evaluation in the Era of Large Language Models: A Comprehensive Survey
Computation and Language
Tests how AI uses outside facts to answer questions.
When Retrieval Succeeds and Fails: Rethinking Retrieval-Augmented Generation for LLMs
Computation and Language
Helps smart computers learn new things faster.
A Knowledge Graph and a Tripartite Evaluation Framework Make Retrieval-Augmented Generation Scalable and Transparent
Information Retrieval
Chatbots answer questions more accurately and reliably.