Owen-Shapley Policy Optimization (OSPO): A Principled RL Algorithm for Generative Search LLMs
By: Abhijnan Nath, Alireza Bagheri Garakani, Tianchen Zhou, and more
Large language models are increasingly trained via reinforcement learning for personalized recommendation tasks, but standard methods like GRPO rely on sparse, sequence-level rewards that create a credit assignment gap, obscuring which tokens drive success. This gap is especially problematic when models must infer latent user intent from under-specified language without ground-truth labels, a reasoning pattern rarely seen during pretraining. We introduce Owen-Shapley Policy Optimization (OSPO), a framework that redistributes sequence-level advantages according to tokens' marginal contributions to outcomes. Unlike value-model-based methods that require additional computation, OSPO employs potential-based reward shaping via Shapley-Owen attributions to assign segment-level credit while preserving the optimal policy, learning directly from task feedback without a parametric value model. By forming coalitions of semantically coherent units, such as phrases describing product attributes or sentences capturing preferences, OSPO identifies which parts of a response drive performance. Experiments on the Amazon ESCI and H&M Fashion datasets show consistent gains over baselines, with notable test-time robustness to out-of-distribution retrievers unseen during training.
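To make the credit-assignment idea concrete, here is a minimal sketch (not the authors' implementation) of segment-level attribution via Monte Carlo Shapley estimation followed by advantage redistribution. The segment splitting, the `coalition_reward` scorer, the proportional redistribution rule, and all names below are illustrative assumptions, not details taken from the paper.

```python
"""Illustrative sketch of Shapley-style segment credit assignment.

Assumptions made here for illustration (not from the paper):
  * a response is already split into semantically coherent segments,
  * `coalition_reward` is a hypothetical black-box scorer for any subset
    of segments (e.g., whether retrieval with those segments succeeds),
  * Shapley values are approximated by Monte Carlo permutation sampling,
  * the sequence-level advantage is split across segments in proportion
    to their (non-negative-shifted) attributions.
"""
import random
from typing import Callable, Sequence


def monte_carlo_shapley(
    segments: Sequence[str],
    coalition_reward: Callable[[Sequence[str]], float],
    num_permutations: int = 200,
    seed: int = 0,
) -> list[float]:
    """Approximate each segment's Shapley value: its average marginal
    contribution to the reward over random orderings of the segments."""
    rng = random.Random(seed)
    n = len(segments)
    values = [0.0] * n
    for _ in range(num_permutations):
        order = list(range(n))
        rng.shuffle(order)
        coalition: list[str] = []
        prev = coalition_reward(coalition)
        for idx in order:
            coalition.append(segments[idx])
            curr = coalition_reward(coalition)
            values[idx] += curr - prev  # marginal contribution of segment idx
            prev = curr
    return [v / num_permutations for v in values]


def redistribute_advantage(
    sequence_advantage: float,
    shapley_values: Sequence[float],
) -> list[float]:
    """Split a sequence-level advantage across segments in proportion to
    their attributions (shifted to be non-negative so weights stay valid)."""
    shift = min(shapley_values)
    weights = [v - shift + 1e-8 for v in shapley_values]
    total = sum(weights)
    return [sequence_advantage * w / total for w in weights]


if __name__ == "__main__":
    # Toy reward: counts how many "useful" attribute phrases the coalition covers.
    useful = {"waterproof", "lightweight"}

    def coalition_reward(coalition: Sequence[str]) -> float:
        text = " ".join(coalition)
        return sum(1.0 for kw in useful if kw in text)

    segments = ["waterproof hiking boots", "free shipping", "lightweight design"]
    phi = monte_carlo_shapley(segments, coalition_reward)
    segment_advantages = redistribute_advantage(
        sequence_advantage=1.5, shapley_values=phi
    )
    for seg, adv in zip(segments, segment_advantages):
        print(f"{seg!r}: advantage share = {adv:.3f}")
```

In this toy run, segments containing useful attribute phrases receive most of the advantage, while the uninformative "free shipping" segment receives almost none, which is the intended effect of segment-level credit assignment.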
Similar Papers
ESPO: Entropy Importance Sampling Policy Optimization
Machine Learning (CS)
Makes AI better at solving math problems.
SSPO: Subsentence-level Policy Optimization
Computation and Language
Makes AI smarter and better at learning from its mistakes.
Soft Adaptive Policy Optimization
Machine Learning (CS)
Teaches AI to learn better and faster.