Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs
By: Zhiyuan Hu, Yucheng Wang, Yufei He, et al.
Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from objectives that regularize local token behavior rather than diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions exhibiting rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem by their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@k across large sampling budgets and increases the area under the pass@k curve (AUC@K) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
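The reweighting step described in the abstract can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, assuming a GRPO-style group-mean baseline and a simple 1/|cluster| weight applied to correct rollouts; the function name, inputs, and exact weighting scheme are assumptions for exposition, not the paper's released implementation.

```python
from collections import Counter
from typing import List

def uniqueness_weighted_advantages(
    rewards: List[float],   # 1.0 if a rollout's final answer is correct, else 0.0
    clusters: List[int],    # strategy-cluster id per rollout, e.g. from an LLM judge
) -> List[float]:
    """Reweight group-relative advantages inversely with strategy-cluster size.

    Hypothetical sketch: correct rollouts whose high-level strategy is rare
    (small cluster) keep a larger positive advantage than redundant ones.
    """
    n = len(rewards)
    # Group baseline: mean reward over all rollouts for this problem.
    baseline = sum(rewards) / n
    # Cluster sizes, where each cluster groups rollouts sharing a strategy.
    sizes = Counter(clusters)
    advantages = []
    for reward, cluster_id in zip(rewards, clusters):
        adv = reward - baseline
        # Scale correct rollouts by the inverse of their cluster's size,
        # so rare strategies earn larger positive advantages.
        if reward > 0:
            adv *= 1.0 / sizes[cluster_id]
        advantages.append(adv)
    return advantages
```

For example, with rewards [1, 1, 1, 1, 0, 0] and clusters [0, 0, 0, 1, 2, 2], the baseline is 2/3, so each redundant correct rollout in cluster 0 ends up with an advantage of roughly 0.11 while the lone correct rollout in cluster 1 keeps the full 0.33, creating the intended pressure toward rare but correct strategies.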