Consolidation or Adaptation? PRISM: Disentangling SFT and RL Data via Gradient Concentration

Published: January 12, 2026 | arXiv ID: 2601.07224v1

By: Yang Zhao, Yangou Ouyang, Xiao Ding, and more

Potential Business Impact:

Trains AI agents better and more cheaply by sorting each learning example into the training stage that suits it.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

While hybrid Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has become the standard paradigm for training LLM agents, effective mechanisms for allocating data between these stages remain largely underexplored. Current data arbitration strategies often rely on surface-level heuristics that fail to diagnose intrinsic learning needs. Since SFT targets pattern consolidation through imitation while RL drives structural adaptation via exploration, misaligning data with these functional roles causes severe optimization interference. We propose PRISM, a dynamics-aware framework grounded in Schema Theory that arbitrates data based on its degree of cognitive conflict with the model's existing knowledge. By analyzing the spatial geometric structure of gradients, PRISM identifies data triggering high spatial concentration as high-conflict signals that require RL for structural adaptation. In contrast, data yielding diffuse updates is routed to SFT for efficient consolidation. Extensive experiments on WebShop and ALFWorld demonstrate that PRISM achieves a Pareto improvement, outperforming state-of-the-art hybrid methods while reducing computational costs by up to 3.22×. Our findings suggest that disentangling data based on internal optimization regimes is crucial for scalable and robust agent alignment.
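
The abstract describes the routing rule but not the exact statistic used to measure gradient concentration. Below is a minimal sketch of one plausible reading, assuming "concentration" is scored as the share of gradient energy held by the top-k coordinates; the `top_frac` and `threshold` values, and the helper names `gradient_concentration` and `route_sample`, are illustrative placeholders rather than details from the paper.

```python
import torch

def gradient_concentration(model, loss, top_frac=0.01):
    """Fraction of gradient energy carried by the top coordinates.

    Hypothetical stand-in metric: the abstract only says PRISM inspects
    the spatial geometric structure of gradients, so top-k energy mass
    is used here to illustrate "concentrated vs. diffuse" updates.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    g = torch.cat([t.flatten() for t in grads])
    energy = g.pow(2)
    energy = energy / (energy.sum() + 1e-12)   # normalize to a distribution
    k = max(1, int(top_frac * energy.numel()))
    # Near 1.0 => energy packed into few coordinates (high conflict);
    # near top_frac => energy spread evenly across the network (diffuse).
    return energy.topk(k).values.sum().item()

def route_sample(model, loss, threshold=0.5):
    """Route one sample: concentrated gradient -> RL, diffuse -> SFT."""
    return "RL" if gradient_concentration(model, loss) > threshold else "SFT"

# Toy usage: score a single example through a small model.
model = torch.nn.Linear(8, 1)
x, y = torch.randn(1, 8), torch.randn(1, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
print(route_sample(model, loss))
```

Under this reading, the threshold splits the batch into a consolidation pool (SFT) and an adaptation pool (RL) before training begins, which is one way the claimed compute savings could arise: only the high-conflict subset pays RL's exploration cost.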

Country of Origin
πŸ‡¨πŸ‡³ China

Page Count
14 pages

Category
Computer Science:
Artificial Intelligence