Less is More: Learning Graph Tasks with Just LLMs
By: Sola Shirai, Kavitha Srinivas, Julian Dolby, and more
Potential Business Impact:
Language models can learn to reason over connected data (graphs) without needing extra specialized tools.
For large language models (LLMs), reasoning over graphs could help solve many problems. Prior work has tried to improve LLM graph reasoning by examining how best to serialize graphs as text and by combining GNNs and LLMs. However, the merits of such approaches remain unclear, so we empirically answer the following research questions: (1) Can LLMs learn to solve fundamental graph tasks without specialized graph encoding models? (2) Can LLMs generalize learned solutions to unseen graph structures or tasks? (3) What are the merits of competing approaches to learning graph tasks? We show that even small LLMs can learn to solve graph tasks by training them with instructive chain-of-thought solutions, and this training generalizes, without specialized graph encoders, to new tasks and graph structures.
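The abstract does not spell out the exact serialization or chain-of-thought format the authors use, so the following is a minimal Python sketch, under assumed conventions, of the general idea: serialize a graph as plain text and pair it with an instructive, step-by-step solution for a fundamental graph task (here, connectivity). The function names (serialize_graph, cot_connectivity) and the prompt wording are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's exact format): serialize a graph as text
# and generate a step-by-step (chain-of-thought) connectivity trace that could
# serve as an instructive training target for an LLM.
from collections import deque

def serialize_graph(edges):
    """Render an undirected graph as a plain-text edge list for an LLM prompt."""
    return "\n".join(f"Node {u} is connected to node {v}." for u, v in edges)

def cot_connectivity(edges, source, target):
    """Trace a breadth-first search as numbered-free, step-by-step prose."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    steps, visited, queue = [], {source}, deque([source])
    while queue:
        node = queue.popleft()
        steps.append(f"Visit node {node}; its neighbors are {sorted(adj.get(node, []))}.")
        if node == target:
            steps.append(f"Node {target} was reached, so {source} and {target} are connected.")
            return "\n".join(steps)
        for nbr in sorted(adj.get(node, [])):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    steps.append(f"Node {target} was never reached, so {source} and {target} are not connected.")
    return "\n".join(steps)

# Example prompt/target pair (hypothetical format) for fine-tuning.
edges = [(0, 1), (1, 2), (3, 4)]
prompt = serialize_graph(edges) + "\nQuestion: Is there a path between node 0 and node 2? Think step by step."
target_text = cot_connectivity(edges, 0, 2)
print(prompt)
print(target_text)
```

Prompt/solution pairs of this kind could then be used as supervised fine-tuning data, with the chain-of-thought trace acting as the "instructive" part of the target output.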
Similar Papers
When Structure Doesn't Help: LLMs Do Not Read Text-Attributed Graphs as Effectively as We Expected
Machine Learning (CS)
Computers understand complex connections without needing extra rules.
Actions Speak Louder than Prompts: A Large-Scale Study of LLMs for Graph Inference
Computation and Language
Computers learn better from connected information.
Learn to Think: Bootstrapping LLM Reasoning Capability Through Graph Representation Learning
Machine Learning (CS)
Helps computers solve hard problems by thinking step-by-step.