SPaRFT: Self-Paced Reinforcement Fine-Tuning for Large Language Models
By: Dai Do, Manh Nguyen, Svetha Venkatesh, and more
Potential Business Impact:
Teaches computers to learn smarter, faster, with less data.
Large language models (LLMs) have shown strong reasoning capabilities when fine-tuned with reinforcement learning (RL). However, such methods require extensive data and compute, making them impractical for smaller models. Current approaches to curriculum learning or data selection are largely heuristic-driven or demand extensive computational resources, limiting their scalability and generalizability. We propose SPaRFT, a self-paced learning framework that enables efficient learning by adapting to the capability of the model being trained, optimizing which data to use and when. First, we apply cluster-based data reduction to partition training data by semantics and difficulty, extracting a compact yet diverse subset that reduces redundancy. Then, a multi-armed bandit treats the data clusters as arms and allocates training samples based on the model's current performance. Experiments across multiple reasoning benchmarks show that SPaRFT achieves comparable or better accuracy than state-of-the-art baselines while using up to 100× fewer samples. Ablation studies and analyses further highlight the importance of both data clustering and adaptive selection. Our results demonstrate that carefully curated, performance-driven training curricula can unlock strong reasoning abilities in LLMs with minimal resources.
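To make the two components concrete, the minimal Python sketch below illustrates the general pattern the abstract describes: cluster prompts into "arms", then let a bandit decide which cluster to draw the next training batch from. It is not the authors' implementation. The semantic embeddings and difficulty scores are simulated with random features, k-means stands in for the clustering step, UCB1 stands in for whatever bandit policy SPaRFT actually uses, and names such as `observed_reward` and all hyperparameters are hypothetical.

```python
# Illustrative sketch only: cluster-then-bandit data selection for RL fine-tuning.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Cluster-based data reduction (simulated features) ----------------------
# Stand-ins for per-prompt semantic embeddings and difficulty scores.
n_prompts, embed_dim, n_clusters = 1000, 32, 8
embeddings = rng.normal(size=(n_prompts, embed_dim))
difficulty = rng.uniform(size=(n_prompts, 1))
X = np.hstack([embeddings, difficulty])

labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
clusters = [np.where(labels == k)[0] for k in range(n_clusters)]

# Keep a compact subset: a few representatives per cluster to reduce redundancy.
subset = {k: rng.choice(idx, size=min(20, len(idx)), replace=False)
          for k, idx in enumerate(clusters)}

# --- Multi-armed bandit over clusters (UCB1 as a stand-in policy) -----------
counts = np.zeros(n_clusters)          # times each cluster was selected
values = np.zeros(n_clusters)          # running mean reward per cluster

def observed_reward(cluster_id: int, batch: np.ndarray) -> float:
    """Placeholder for the training signal (e.g., accuracy gain after one RL
    update on this batch); simulated here so the sketch runs standalone."""
    return float(rng.beta(2 + cluster_id, 5))

for t in range(1, 201):
    # UCB1 score: explore under-sampled clusters, exploit high-reward ones.
    ucb = values + np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    ucb[counts == 0] = np.inf          # try every arm at least once
    arm = int(np.argmax(ucb))

    batch = rng.choice(subset[arm], size=4)   # prompts for this training step
    r = observed_reward(arm, batch)           # reward after the (simulated) update

    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]

print("samples drawn per cluster:", counts.astype(int))
```

In a real pipeline, the simulated reward would be replaced by the model's measured performance after fine-tuning on the selected batch, so the allocation shifts toward clusters that currently yield the most learning progress.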
Similar Papers
Efficient Reinforcement Finetuning via Adaptive Curriculum Learning
Machine Learning (CS)
Teaches computers math faster and better.
Mitigating Forgetting Between Supervised and Reinforcement Learning Yields Stronger Reasoners
Computation and Language
Makes AI smarter by learning from mistakes.
Reassessing the Role of Supervised Fine-Tuning: An Empirical Study in VLM Reasoning
Machine Learning (CS)
Makes AI better at thinking, even small ones.