Evidence-Augmented Policy Optimization with Reward Co-Evolution for Long-Context Reasoning
By: Xin Guan, Zijian Li, Shen Huang, and more
While Reinforcement Learning (RL) has advanced LLM reasoning, applying it to long-context scenarios is hindered by the sparsity of outcome rewards. Sparse outcome rewards fail to penalize ungrounded "lucky guesses," leaving the critical process of needle-in-a-haystack evidence retrieval largely unsupervised. To address this, we propose EAPO (Evidence-Augmented Policy Optimization). We first establish the Evidence-Augmented Reasoning paradigm, validating via Tree-Structured Evidence Sampling that precise evidence extraction is the decisive bottleneck for long-context reasoning. Guided by this insight, EAPO introduces a specialized RL algorithm in which a reward model computes a Group-Relative Evidence Reward, providing dense process supervision that explicitly improves evidence quality. To sustain accurate supervision throughout training, we further incorporate an Adaptive Reward-Policy Co-Evolution mechanism, which iteratively refines the reward model using outcome-consistent rollouts, sharpening its discriminative capability to ensure precise process guidance. Comprehensive evaluations across eight benchmarks demonstrate that EAPO significantly enhances long-context reasoning performance compared to SOTA baselines.
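To make the Group-Relative Evidence Reward concrete, the sketch below shows one plausible way such a reward could be computed: evidence-quality scores from a reward model are normalized within each group of rollouts and added to the sparse outcome reward. This is a minimal illustration assuming a scalar evidence score per rollout; the function name score layout, the weighting term alpha, and the exact normalization are assumptions, not the authors' implementation.

```python
# Hedged sketch (not the paper's code) of a group-relative evidence reward:
# evidence scores from a reward model are normalized within a group of
# rollouts and combined with the sparse outcome reward. `alpha` is an
# assumed weight for the dense evidence term.
from typing import List


def group_relative_evidence_reward(
    evidence_scores: List[float],   # reward-model scores for each rollout's extracted evidence
    outcome_rewards: List[float],   # sparse correctness rewards (e.g., 0/1) per rollout
    alpha: float = 0.5,             # assumed weight of the dense process signal
    eps: float = 1e-6,              # numerical stability for the group std
) -> List[float]:
    """Combine outcome rewards with group-normalized evidence scores."""
    n = len(evidence_scores)
    mean = sum(evidence_scores) / n
    std = (sum((s - mean) ** 2 for s in evidence_scores) / n) ** 0.5
    # Normalize within the group so the process signal favors rollouts whose
    # evidence is better than their peers', independent of absolute scale.
    relative = [(s - mean) / (std + eps) for s in evidence_scores]
    return [o + alpha * r for o, r in zip(outcome_rewards, relative)]


if __name__ == "__main__":
    # Four rollouts for one long-context question: note the correct but
    # poorly-grounded "lucky guess" (outcome 1.0, low evidence score) is
    # rewarded less than the correct, well-grounded rollout.
    evidence = [0.9, 0.2, 0.7, 0.1]
    outcomes = [1.0, 1.0, 0.0, 0.0]
    print(group_relative_evidence_reward(evidence, outcomes))
```

In this reading, the group-relative normalization plays the same role as the group baseline in group-based policy optimization methods: it turns a raw evidence score into a dense, comparative signal that can be added to the outcome reward without changing its scale across prompts.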
Similar Papers
EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via Reinforcement Learning
Computation and Language
Helps computers plan and win complex games.
ExPO: Unlocking Hard Reasoning with Self-Explanation-Guided Reinforcement Learning
Machine Learning (CS)
Helps computers learn to solve hard problems.
EEPO: Exploration-Enhanced Policy Optimization via Sample-Then-Forget
Computation and Language
Helps AI learn new things by forgetting and trying again.