Semantic Refinement with LLMs for Graph Representations

Published: December 24, 2025 | arXiv ID: 2512.21106v1

By: Safal Thapaliya, Zehong Wang, Jiazheng Li, and more

Potential Business Impact:

Helps computers learn from graph data that mixes text descriptions with network structure.

Business Areas:
Semantic Web, Internet Services

Graph-structured data exhibit substantial heterogeneity in where their predictive signals originate: in some domains, node-level semantics dominate, while in others, structural patterns play the central role. This structure-semantics heterogeneity implies that no graph learning model with a fixed inductive bias can generalize optimally across diverse graph domains. Yet most existing methods address the challenge from the model side, incrementally injecting new inductive biases, an approach that remains fundamentally limited given the open-ended diversity of real-world graphs. In this work, we take a data-centric perspective and treat node semantics as a task-adaptive variable. We propose DAS, a Data-Adaptive Semantic Refinement framework for graph representation learning that couples a fixed graph neural network (GNN) with a large language model (LLM) in a closed feedback loop: the GNN provides implicit supervisory signals that guide the LLM's semantic refinement, and the refined semantics are fed back to update the same graph learner. We evaluate our approach on both text-rich and text-free graphs. Results show consistent improvements on structure-dominated graphs while remaining competitive on semantics-rich graphs, demonstrating the effectiveness of data-centric semantic adaptation under structure-semantics heterogeneity.
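The sketch below illustrates the closed feedback loop the abstract describes, not the paper's actual implementation. It assumes a simple 2-layer GCN as the fixed graph learner, uses the gradient of the training loss with respect to node features as one plausible choice of "implicit supervisory signal" (the abstract does not specify the signal), and stubs the LLM with a hypothetical `llm_refine` function. All names (`SimpleGCN`, `llm_refine`, `das_loop`) are illustrative.

```python
# Minimal DAS-style loop sketch (PyTorch). Assumptions are flagged inline;
# the real method's GNN, feedback signal, and LLM refinement may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCN(nn.Module):
    """Fixed graph learner: two propagate-then-project layers (assumed)."""

    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, num_classes)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        h = F.relu(a_hat @ self.lin1(x))  # neighborhood aggregation
        return a_hat @ self.lin2(h)       # per-node class logits


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)


def llm_refine(features: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for LLM-driven semantic refinement.

    In the paper, an LLM rewrites node semantics guided by signals from
    the GNN; here we merely nudge features along the feedback direction
    so the sketch stays self-contained and runnable.
    """
    return features + 0.1 * feedback


def das_loop(x, adj, labels, train_mask, rounds=3, epochs=50):
    a_hat = normalize_adj(adj)
    gnn = SimpleGCN(x.size(1), 32, int(labels.max()) + 1)
    opt = torch.optim.Adam(gnn.parameters(), lr=0.01)
    for _ in range(rounds):
        # 1) Train the fixed graph learner on the current semantics.
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(gnn(x, a_hat)[train_mask], labels[train_mask])
            loss.backward()
            opt.step()
        # 2) Derive an implicit supervisory signal: here, the negative
        #    gradient of the loss w.r.t. node features (an assumption).
        x = x.detach().requires_grad_(True)
        F.cross_entropy(gnn(x, a_hat)[train_mask], labels[train_mask]).backward()
        feedback = -x.grad
        # 3) Refine semantics with the (stubbed) LLM and feed them back
        #    into the same graph learner for the next round.
        x = llm_refine(x.detach(), feedback)
    return gnn, x
```

The key design point the loop captures is that the GNN itself stays fixed: only the node semantics `x` change across rounds, which is what makes the adaptation data-centric rather than model-centric.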

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Computation and Language