AbstRaL: Augmenting LLMs' Reasoning by Reinforcing Abstract Thinking

Published: June 9, 2025 | arXiv ID: 2506.07751v2

By: Silin Gao, Antoine Bosselut, Samy Bengio, and more

Potential Business Impact:

Teaches computers to think smarter, not just memorize.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent studies have shown that large language models (LLMs), especially smaller ones, often lack robustness in their reasoning: they tend to suffer performance drops under distribution shifts, such as changes to numerical or nominal variables or the insertion of distracting clauses. One strategy to address this is to generate synthetic data that further "instantiates" reasoning problems across potential variations. Our approach, in contrast, "abstracts" reasoning problems. This not only counteracts distribution shifts but also facilitates connecting to symbolic tools for deriving solutions. We find that this abstraction process is better acquired through reinforcement learning (RL) than through supervised fine-tuning alone, which often fails to produce faithful abstractions. Our method, AbstRaL -- which promotes abstract reasoning in LLMs using RL on granular abstraction data -- significantly mitigates performance degradation on recent GSM perturbation benchmarks.
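To make the "abstracting" idea concrete, here is a minimal sketch (not the paper's actual pipeline) of what abstracting a GSM-style problem could look like: concrete numbers are lifted into placeholder symbols, a symbolic solution is derived once over those symbols, and any re-instantiation of the numbers (the kind of perturbation GSM benchmarks apply) reuses the same abstract solution. The problem text and the `x0 * x1 + x2` answer formula are illustrative assumptions.

```python
import re

def abstract_problem(problem: str):
    """Replace concrete numbers with placeholder symbols (x0, x1, ...),
    returning the abstract template and the extracted values."""
    values = []

    def repl(match):
        values.append(int(match.group()))
        return f"x{len(values) - 1}"

    template = re.sub(r"\d+", repl, problem)
    return template, values

# A hypothetical GSM-style problem; its abstract answer is x0 * x1 + x2.
problem = "Tom buys 3 packs of 12 pencils and 5 loose pencils. How many pencils does he have?"
template, values = abstract_problem(problem)
# template == "Tom buys x0 packs of x1 pencils and x2 loose pencils. ..."

def solve_abstract(vals):
    # Symbolic solution derived once from the template: x0 * x1 + x2.
    x0, x1, x2 = vals
    return x0 * x1 + x2

print(solve_abstract(values))       # original instantiation: 3 * 12 + 5 = 41
print(solve_abstract([7, 10, 4]))   # perturbed numbers, same abstraction: 74
```

Because the solution is attached to the abstract template rather than to any particular numbers, a numeric perturbation of the problem cannot break it, which is the robustness property the abstract describes.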


Page Count
23 pages

Category
Computer Science:
Computation and Language