When Structure Doesn't Help: LLMs Do Not Read Text-Attributed Graphs as Effectively as We Expected
By: Haotian Xu, Yuning You, Tengfei Ma
Potential Business Impact:
Computers understand complex connections without needing extra rules.
Graphs provide a unified representation of semantic content and relational structure, making them a natural fit for domains such as molecular modeling, citation networks, and social graphs. Meanwhile, large language models (LLMs) have excelled at understanding natural language and integrating cross-modal signals, sparking interest in their potential for graph reasoning. Recent work has explored this by either designing textual templates that describe graph structure or using graph neural networks (GNNs) to encode structural information. In this study, we investigate how different strategies for encoding graph structure affect LLM performance on text-attributed graphs. Surprisingly, our systematic experiments reveal that: (i) LLMs leveraging only node textual descriptions already achieve strong performance across tasks; and (ii) most structural encoding strategies offer marginal or even negative gains. We show that explicit structural priors are often unnecessary and, in some cases, counterproductive when powerful language models are involved. This represents a significant departure from traditional graph learning paradigms and highlights the need to rethink how structure should be represented and utilized in the LLM era. Our study systematically challenges the foundational assumption that structure is inherently beneficial for LLM-based graph reasoning, opening the door to new, semantics-driven approaches for graph learning.
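To make the contrast concrete, the sketch below shows two prompt-construction strategies of the kind the abstract compares: one that gives the LLM only a node's own text, and a template-based one that also serializes neighbor texts. The node descriptions, edges, and function names here are illustrative assumptions, not the paper's actual benchmarks or templates.

```python
# Hypothetical sketch of two prompting strategies for a text-attributed
# citation graph. All data below is made up for illustration.

node_text = {
    0: "A transformer architecture for sequence transduction tasks.",
    1: "Pre-training deep bidirectional transformers for language understanding.",
    2: "Graph attention networks for node classification.",
}
edges = [(0, 1), (0, 2)]  # undirected citation links

def text_only_prompt(node_id: int) -> str:
    """Strategy (i): the LLM sees only the node's own description."""
    return f"Classify the topic of this paper:\n{node_text[node_id]}"

def structure_augmented_prompt(node_id: int) -> str:
    """A template-based structural encoding: append 1-hop neighbor texts."""
    neighbors = [v for u, v in edges if u == node_id] + \
                [u for u, v in edges if v == node_id]
    lines = [text_only_prompt(node_id), "It cites or is cited by:"]
    lines += [f"- {node_text[n]}" for n in neighbors]
    return "\n".join(lines)

print(text_only_prompt(0))
print(structure_augmented_prompt(0))
```

The paper's finding, on this framing, is that prompts like `structure_augmented_prompt` often do not outperform `text_only_prompt` when the underlying language model is strong.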
Similar Papers
Less is More: Learning Graph Tasks with Just LLMs
Machine Learning (CS)
Computers learn to solve problems using connected ideas.
Actions Speak Louder than Prompts: A Large-Scale Study of LLMs for Graph Inference
Computation and Language
Computers learn better from connected information.
Large Language Models Meet Text-Attributed Graphs: A Survey of Integration Frameworks and Applications
Computation and Language
Helps computers understand and reason better.