Ontology Learning and Knowledge Graph Construction: A Comparison of Approaches and Their Impact on RAG Performance
By: Tiago da Cruz, Bernardo Tavares, Francisco Belo
Potential Business Impact:
Makes AI smarter by using organized facts.
Retrieval-Augmented Generation (RAG) systems combine Large Language Models (LLMs) with external knowledge, and their performance depends heavily on how that knowledge is represented. This study investigates how different Knowledge Graph (KG) construction strategies influence RAG performance. We compare a variety of approaches: standard vector-based RAG, GraphRAG, and retrieval over KGs built from ontologies derived either from relational databases or from textual corpora. Results show that ontology-guided KGs incorporating chunk information achieve competitive performance with state-of-the-art frameworks, substantially outperforming vector retrieval baselines. Moreover, the findings reveal that ontology-guided KGs built from relational databases perform competitively with those built from ontologies extracted from text, while offering a dual advantage: they require only a one-time ontology learning process, substantially reducing LLM usage costs, and they avoid the complexity of ontology merging inherent to text-based approaches.
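To make the contrast concrete, here is a minimal, self-contained sketch (not the paper's implementation; all data, entity names, and scoring are illustrative) of the two retrieval styles the abstract compares: plain vector-based retrieval over text chunks versus retrieval over an ontology-guided KG whose triples are linked back to their source chunks ("chunk information").

```python
# Toy contrast between vector-based RAG retrieval and retrieval over an
# ontology-guided knowledge graph. Illustrative only: a real system would
# use learned embeddings and an LLM-extracted graph, not bag-of-words.
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector over lowercase word tokens."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Shared corpus of text chunks (hypothetical facts).
chunks = [
    "Acme Corp acquired Beta Labs in 2021.",
    "Beta Labs develops graph databases.",
    "Acme Corp is headquartered in Lisbon.",
]

# --- Vector-based RAG: rank raw chunks by query similarity ---
def vector_retrieve(query, k=1):
    q = bow(query)
    return sorted(chunks, key=lambda c: cosine(q, bow(c)), reverse=True)[:k]

# --- Ontology-guided KG: typed triples, each linked to its source chunk ---
triples = [
    ("Acme Corp", "acquired", "Beta Labs", 0),        # last field: chunk index
    ("Beta Labs", "develops", "graph databases", 1),
    ("Acme Corp", "headquartered_in", "Lisbon", 2),
]

def kg_retrieve(entity):
    """Return all facts mentioning the entity, plus their supporting chunks."""
    hits = [t for t in triples if entity in (t[0], t[2])]
    return [(s, p, o, chunks[i]) for s, p, o, i in hits]

top = vector_retrieve("Who acquired Beta Labs?")
facts = kg_retrieve("Beta Labs")
```

Under this sketch, the KG path returns every fact touching an entity together with its provenance chunk, which is the kind of structured, chunk-linked evidence the abstract credits for outperforming vector retrieval alone.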
Similar Papers
- Aligning LLMs for the Classroom with Knowledge-Based Retrieval -- A Comparative RAG Study (Artificial Intelligence): Makes AI answers for school more truthful.
- Graph-based Approaches and Functionalities in Retrieval-Augmented Generation: A Comprehensive Survey (Information Retrieval): Helps computers answer questions using real-world facts.
- A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models (Computation and Language): Helps computers understand complex topics better.