A Rolling Stone Gathers No Moss: Adaptive Policy Optimization for Stable Self-Evaluation in Large Multimodal Models
By: Wenkai Wang, Hongcan Guo, Zheqi Lv, and more
Potential Business Impact:
Helps AI learn to fix its own mistakes.
Self-evaluation, a model's ability to assess the correctness of its own output, is crucial for Large Multimodal Models (LMMs) to achieve self-improvement in multi-turn conversations, yet it is largely absent in foundation models. Recent work has employed reinforcement learning (RL) to enhance self-evaluation; however, its fixed reward mechanism suffers from reward hacking when optimizing multiple training objectives, leading to model collapse. In this paper, we propose AdaPO, an online reinforcement learning framework capable of adaptively adjusting the training objective in real time according to the current training state of each task. Specifically, to mitigate reward hacking, AdaPO introduces an Adaptive Reward Model (ARM) and a Reward Aware Dynamic KL Regularization mechanism. ARM assesses a task's training state from the performance distribution of model-generated multi-turn trajectories. Reward Aware Dynamic KL replaces a fixed penalty with dynamic coefficients that are modulated by the reward gap between different multi-turn situations. Notably, our method automatically and smoothly adjusts its learning focus based on each sub-task's training progress, without manual intervention. Extensive experiments over 8 benchmarks and various models show that our method significantly enhances both direct reasoning and self-evaluation capabilities. We will release our code to contribute to the community.
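The core idea of the dynamic KL term can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a simple exponential modulation of the KL coefficient by the reward gap, and the function names (`reward_aware_kl_coeff`, `policy_loss`) and the policy-gradient form are illustrative choices only.

```python
# Minimal sketch (not the paper's code) of a reward-aware dynamic KL penalty.
# Assumption: the KL coefficient shrinks as the reward gap between multi-turn
# situations grows; AdaPO's actual modulation function may differ.
import torch


def reward_aware_kl_coeff(reward_gap: torch.Tensor,
                          base_coeff: float = 0.1,
                          sensitivity: float = 5.0) -> torch.Tensor:
    """Larger reward gaps -> weaker KL pull (policy is allowed to move);
    smaller gaps -> stronger regularization toward the reference model."""
    return base_coeff * torch.exp(-sensitivity * reward_gap.clamp(min=0.0))


def policy_loss(logprobs: torch.Tensor,
                ref_logprobs: torch.Tensor,
                advantages: torch.Tensor,
                reward_gap: torch.Tensor) -> torch.Tensor:
    """Hypothetical objective: a REINFORCE-style policy-gradient term plus a
    KL penalty whose weight is set per batch from the reward gap."""
    kl_estimate = logprobs - ref_logprobs          # per-token log-ratio vs. reference
    beta = reward_aware_kl_coeff(reward_gap)       # dynamic coefficient, not a fixed value
    return -(advantages * logprobs).mean() + beta * kl_estimate.mean()
```

Under this reading, the fixed KL weight of standard RLHF-style objectives is replaced by a batch-dependent coefficient, so sub-tasks whose trajectories already show a large reward separation receive less regularization pressure than those that have not yet differentiated.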
Similar Papers
Agentic Reinforced Policy Optimization
Machine Learning (CS)
Teaches AI to use tools better in conversations.
Multi-Objective Reward and Preference Optimization: Theory and Algorithms
Machine Learning (CS)
Teaches computers to make safe, smart choices.
RRPO: Robust Reward Policy Optimization for LLM-based Emotional TTS
Sound
Makes computer voices sound more real and emotional.