Less is More: Learning Graph Tasks with Just LLMs

Published: August 13, 2025 | arXiv ID: 2508.10115v1

By: Sola Shirai, Kavitha Srinivas, Julian Dolby, and more

BigTech Affiliations: IBM

Potential Business Impact:

Large language models learn to solve problems over connected data, such as networks and knowledge graphs, without needing specialized graph software.

For large language models (LLMs), reasoning over graphs could help solve many problems. Prior work has tried to improve LLM graph reasoning by examining how best to serialize graphs as text and by combining GNNs with LLMs. However, the merits of such approaches remain unclear, so we empirically answer the following research questions: (1) Can LLMs learn to solve fundamental graph tasks without specialized graph encoding models? (2) Can LLMs generalize learned solutions to unseen graph structures or tasks? (3) What are the merits of competing approaches for learning graph tasks? We show that even small LLMs can learn to solve graph tasks by training them on instructive chain-of-thought solutions, and that this training generalizes, without specialized graph encoders, to new tasks and graph structures.

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)