MAESTRO: Meta-learning Adaptive Estimation of Scalarization Trade-offs for Reward Optimization
By: Yang Zhao, Hepeng Wang, Xiao Ding, and more
Potential Business Impact:
Helps AI balance different goals when writing.
Group-Relative Policy Optimization (GRPO) has emerged as an efficient paradigm for aligning Large Language Models (LLMs), yet its efficacy is primarily confined to domains with verifiable ground truths. Extending GRPO to open-domain settings remains a critical challenge, as unconstrained generation entails multi-faceted and often conflicting objectives - such as creativity versus factuality - where rigid, static reward scalarization is inherently suboptimal. To address this, we propose MAESTRO (Meta-learning Adaptive Estimation of Scalarization Trade-offs for Reward Optimization), which introduces a meta-cognitive orchestration layer that treats reward scalarization as a dynamic latent policy, leveraging the model's terminal hidden states as a semantic bottleneck to perceive task-specific priorities. We formulate this as a contextual bandit problem within a bi-level optimization framework, where a lightweight Conductor network co-evolves with the policy by utilizing group-relative advantages as a meta-reward signal. Across seven benchmarks, MAESTRO consistently outperforms single-reward and static multi-objective baselines, while preserving the efficiency advantages of GRPO, and in some settings even reducing redundant generation.
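To make the abstract's core mechanism concrete, below is a minimal sketch of the idea it describes: a lightweight "Conductor" network maps the policy's terminal hidden state to scalarization weights over several reward objectives, and is updated contextual-bandit style using a group-relative advantage as its meta-reward. The hidden dimension, the number of objectives, the Dirichlet parameterization of the weights, and the REINFORCE-style update are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of dynamic reward scalarization with a Conductor network.
# Assumptions (not from the paper): hidden_dim=64, 3 reward objectives,
# Dirichlet-distributed weights, REINFORCE update on the group advantage.
import torch
import torch.nn as nn


class Conductor(nn.Module):
    """Maps a terminal hidden state to a distribution over scalarization weights."""

    def __init__(self, hidden_dim: int, num_objectives: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 128),
            nn.Tanh(),
            nn.Linear(128, num_objectives),
        )

    def forward(self, h_terminal: torch.Tensor) -> torch.distributions.Dirichlet:
        # Softplus keeps the Dirichlet concentrations positive (assumed parameterization).
        alpha = torch.nn.functional.softplus(self.net(h_terminal)) + 1e-3
        return torch.distributions.Dirichlet(alpha)


def group_relative_advantage(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style advantage: normalize each scalar reward within its sampled group."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)


# Toy usage on random tensors standing in for one group of rollouts.
hidden_dim, num_objectives, group_size = 64, 3, 8
conductor = Conductor(hidden_dim, num_objectives)
opt = torch.optim.Adam(conductor.parameters(), lr=1e-3)

h = torch.randn(group_size, hidden_dim)                  # terminal hidden states
per_objective = torch.rand(group_size, num_objectives)   # e.g. creativity, factuality, ...

dist = conductor(h)
w = dist.sample()                                 # sampled scalarization weights
scalar_reward = (w * per_objective).sum(dim=-1)   # dynamic scalarization of rewards
adv = group_relative_advantage(scalar_reward)     # group-relative meta-reward signal

# Contextual-bandit (REINFORCE) update of the Conductor on the group advantage.
loss = -(dist.log_prob(w) * adv.detach()).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

In this reading, the policy itself would still be trained with standard GRPO on the scalarized reward, while the Conductor co-evolves by reusing the same group statistics as its learning signal; how the bi-level optimization is scheduled is left to the paper.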
Similar Papers
MAESTRO: Multi-Agent Environment Shaping through Task and Reward Optimization
Machine Learning (CS)
Teaches AI to control traffic better using smart lessons.
Scalable Multi-Objective and Meta Reinforcement Learning via Gradient Estimation
Machine Learning (CS)
Groups similar robot tasks for faster learning.
Maestro: Learning to Collaborate via Conditional Listwise Policy Optimization for Multi-Agent LLMs
Artificial Intelligence
Helps AI teams solve harder problems better.