Lost in Serialization: Invariance and Generalization of LLM Graph Reasoners
By: Daniel Herbst, Lea Karbeska, Divyanshu Kumar, and more
Potential Business Impact:
Makes AI answer questions about graphs the same way, no matter how the graph is written down.
While promising, graph reasoners based on Large Language Models (LLMs) lack built-in invariance to symmetries in graph representations. Operating on sequential graph serializations, LLMs can produce different outputs under node reindexing, edge reordering, or formatting changes, raising robustness concerns. We systematically analyze these effects, studying how fine-tuning impacts encoding sensitivity as well as generalization to unseen tasks. We propose a principled decomposition of graph serializations into node labeling, edge encoding, and syntax, and evaluate LLM robustness to variations of each of these factors on a comprehensive benchmarking suite. We also contribute a novel set of spectral tasks to further assess the generalization abilities of fine-tuned reasoners. Results show that larger (non-fine-tuned) models are more robust. Fine-tuning reduces sensitivity to node relabeling but may increase it for variations in structure and format, and it does not consistently improve performance on unseen tasks.
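To make the serialization decomposition concrete, here is a minimal illustrative sketch (not the authors' benchmarking code) of the three factors the abstract names: node labeling, edge encoding order, and syntax. The example graph, function names, and the two text formats are assumptions chosen for illustration.

```python
import random

# Illustrative sketch of the three serialization factors:
# node labeling, edge ordering, and syntax.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # a small example graph

def relabel_nodes(edges, seed=0):
    """Node labeling: apply a random reindexing (permutation of node IDs)."""
    rng = random.Random(seed)
    nodes = sorted({u for e in edges for u in e})
    perm = dict(zip(nodes, rng.sample(nodes, len(nodes))))
    return [(perm[u], perm[v]) for u, v in edges]

def reorder_edges(edges, seed=0):
    """Edge encoding: shuffle the order in which edges are listed."""
    rng = random.Random(seed)
    shuffled = list(edges)
    rng.shuffle(shuffled)
    return shuffled

def serialize(edges, syntax="edge_list"):
    """Syntax: render the same graph in two different textual formats."""
    if syntax == "edge_list":
        return ", ".join(f"({u}, {v})" for u, v in edges)
    if syntax == "adjacency":
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        return "; ".join(f"{u}: {sorted(vs)}" for u, vs in sorted(adj.items()))
    raise ValueError(f"unknown syntax: {syntax}")

# All three strings describe the same graph, yet an LLM prompted with
# each may answer the same graph question differently:
print(serialize(edges))
print(serialize(reorder_edges(edges, seed=1)))
print(serialize(relabel_nodes(edges, seed=1), syntax="adjacency"))
```

A graph-isomorphism-invariant reasoner would give identical answers for all three strings; the paper's benchmark measures how far LLMs fall short of that ideal along each factor separately.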
Similar Papers
Reasoning Models Reason Well, Until They Don't
Artificial Intelligence
Makes smart computers better at solving hard problems.
Less is More: Learning Graph Tasks with Just LLMs
Machine Learning (CS)
Computers learn to solve problems using connected ideas.
Do Larger Language Models Imply Better Generalization? A Pretraining Scaling Law for Implicit Reasoning
Artificial Intelligence
Makes AI better at solving puzzles with lots of steps.