Mode-Conditioning Unlocks Superior Test-Time Scaling
By: Chen Henry Wu, Sachin Goyal, Aditi Raghunathan
Potential Business Impact:
Improves AI reasoning accuracy by making repeated attempts explore genuinely different solution approaches instead of repeating the same mistake.
Parallel sampling promises substantial gains in test-time scaling, but its effectiveness is sharply limited by diversity collapse, where models concentrate on a few modes and repeated samples produce the same mistakes. We propose the mode-conditioning (ModC) framework, which explicitly allocates test-time compute across reasoning modes using either specialist models or mode-specific prefixes. ModC consistently improves scaling across controlled graph-search tasks and large-scale reasoning benchmarks, spanning model families and sizes from 0.5B to 7B. On OpenThoughts, fine-tuning Qwen2.5-7B with ModC achieves a 4x efficiency gain over standard training while also improving the maximum attainable Pass@k. We further show that gradient clustering enables ModC without explicit mode labels, yielding up to 10% gains on datasets such as NuminaMath. Finally, we show that ModC improves reinforcement learning (RL) and can further boost diversity-inducing RL methods. These results demonstrate that standard training underutilizes the diversity in data, and that ModC provides a simple, effective remedy for unlocking the full benefits of diversity in test-time scaling.
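To make the core idea concrete, here is a minimal sketch of mode-conditioned parallel sampling with mode-specific prefixes. It assumes a `sample_completion(prompt) -> str` callable and an `is_correct(answer) -> bool` checker, and the prefixes, function names, and round-robin compute allocation are illustrative assumptions rather than the paper's exact recipe.

```python
import random
from typing import Callable, List

# Illustrative mode prefixes (hypothetical): each prefix is meant to steer the
# model toward a distinct reasoning mode, e.g. algebraic vs. enumerative.
MODE_PREFIXES: List[str] = [
    "Let's solve this with algebra.\n",
    "Let's solve this by case analysis.\n",
    "Let's solve this by working backwards.\n",
]

def mode_conditioned_pass_at_k(
    prompt: str,
    k: int,
    sample_completion: Callable[[str], str],  # assumed: prompt -> sampled answer
    is_correct: Callable[[str], bool],        # assumed: answer -> correctness
) -> bool:
    """Spread a budget of k parallel samples across reasoning modes.

    Standard Pass@k draws all k samples from one (often collapsed) distribution;
    mode-conditioning instead conditions each slice of the budget on a different
    mode prefix, so repeated samples are less likely to repeat the same mistake.
    """
    for i in range(k):
        prefix = MODE_PREFIXES[i % len(MODE_PREFIXES)]  # round-robin compute allocation
        answer = sample_completion(prefix + prompt)
        if is_correct(answer):
            return True  # at least one of the k samples solves the task
    return False

# Toy usage with a stub sampler (random guesses), just to show the call pattern.
if __name__ == "__main__":
    solved = mode_conditioned_pass_at_k(
        prompt="What is 17 * 23?",
        k=8,
        sample_completion=lambda p: str(random.choice([391, 381, 401])),
        is_correct=lambda a: a.strip() == "391",
    )
    print("solved:", solved)
```

In the specialist-model variant described in the abstract, the per-mode prefixes would instead be replaced by separate fine-tuned models, with the same round-robin split of the sampling budget; the paper also obtains mode labels without annotations via gradient clustering, which this sketch does not cover.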
Similar Papers
Optimal Self-Consistency for Efficient Reasoning with Large Language Models
Machine Learning (CS)
Makes AI smarter with fewer guesses.
PMODE: Theoretically Grounded and Modular Mixture Modeling
Machine Learning (CS)
Helps computers find unusual data points in huge datasets.
Cream of the Crop: Harvesting Rich, Scalable and Transferable Multi-Modal Data for Instruction Fine-Tuning
Computer Vision and Pattern Recognition
Helps AI learn better from pictures and words.