KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs

Published: April 9, 2025 | arXiv ID: 2504.07087v1

By: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, and more

Potential Business Impact:

Helps LLMs answer factual questions more reliably by feeding them up-to-date knowledge-graph facts as text.

Business Areas:
Text Analytics, Data and Analytics, Software

Knowledge graphs have emerged as a popular method for injecting up-to-date, factual knowledge into large language models (LLMs). This is typically achieved by converting the knowledge graph into text that the LLM can process in context. While multiple methods of encoding knowledge graphs have been proposed, the impact of this textualization process on LLM performance remains under-explored. We introduce KG-LLM-Bench, a comprehensive and extensible benchmark spanning five knowledge graph understanding tasks, and evaluate how different encoding strategies affect performance across various base models. Our extensive experiments with seven language models and five textualization strategies provide insights for optimizing LLM performance on KG reasoning tasks.
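To make the textualization idea concrete, here is a minimal sketch of two hypothetical encoding strategies for turning (subject, relation, object) triples into prompt text. The strategy names and facts are illustrative assumptions, not the paper's actual five strategies:

```python
# Illustrative only: two simple ways to textualize a knowledge graph
# of (subject, relation, object) triples for an LLM's context window.

triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def encode_as_triples(kg):
    """Terse structured encoding: one '(s, r, o)' line per edge."""
    return "\n".join(f"({s}, {r}, {o})" for s, r, o in kg)

def encode_as_sentences(kg):
    """Natural-language encoding, closer to LLM pretraining text."""
    return "\n".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in kg)

# The encoded graph is then placed in context ahead of the question.
prompt = f"Facts:\n{encode_as_sentences(triples)}\n\nQuestion: ..."
```

The benchmark's finding is that choices like these, which preserve identical information, can nonetheless shift downstream reasoning accuracy across models.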

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
24 pages

Category
Computer Science:
Computation and Language