Group-Aware Reinforcement Learning for Output Diversity in Large Language Models
By: Oron Anschel, Alon Shoshan, Adam Botach, and others
Potential Business Impact:
Makes AI give more varied and interesting answers.
Large Language Models (LLMs) often suffer from mode collapse, repeatedly generating the same few completions even when many valid answers exist, which limits their diversity across a wide range of tasks. We introduce Group-Aware Policy Optimization (GAPO), a simple extension of the recent and popular Group Relative Policy Optimization (GRPO) that computes rewards over the group of sampled completions as a whole. GAPO enables learning from group-level properties such as diversity and coverage. We demonstrate GAPO with a frequency-aware reward function that encourages uniform sampling over valid LLM completions, and show that GAPO-trained models produce valid and more diverse responses. Beyond this setup, GAPO generalizes to open-ended prompts and improves response diversity without compromising accuracy on standard LLM benchmarks (GSM8K, MATH, HumanEval, MMLU-Pro). Our code will be made publicly available.
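The group-level idea in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the reward function, its scaling by answer frequency, and the group-relative normalization step are all assumptions here, loosely modeled on how GRPO-style methods normalize rewards within a sampled group.

```python
from collections import Counter

def frequency_aware_rewards(completions, validity):
    # Hypothetical frequency-aware reward: a valid completion earns a base
    # reward of 1.0, scaled down by how often its answer appears in the
    # group, which pushes sampling toward uniform coverage of valid answers.
    counts = Counter(completions)
    return [
        (1.0 if valid else 0.0) / counts[c]
        for c, valid in zip(completions, validity)
    ]

def group_relative_advantages(rewards):
    # GRPO-style normalization: subtract the group mean and divide by the
    # group standard deviation, so advantages are relative within the group.
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    std = std if std > 0 else 1.0
    return [(r - mean) / std for r in rewards]

# A group of 4 sampled answers to one prompt; "A" is over-represented.
group = ["A", "A", "A", "B"]
valid = [True, True, True, True]
rewards = frequency_aware_rewards(group, valid)
advantages = group_relative_advantages(rewards)
```

Because "A" appears three times, each "A" sample receives a smaller reward than the lone "B", so the policy gradient favors the under-sampled valid answer, which is the coverage behavior the abstract describes.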
Similar Papers
Group Causal Policy Optimization for Post-Training Large Language Models
Machine Learning (CS)
Makes AI better at choosing the best answers.
Information-Consistent Language Model Recommendations through Group Relative Policy Optimization
Machine Learning (CS)
Makes AI give the same answers every time.
Group-in-Group Policy Optimization for LLM Agent Training
Machine Learning (CS)
Helps AI agents learn better from many steps.