Balancing Rewards in Text Summarization: Multi-Objective Reinforcement Learning via HyperVolume Optimization
By: Junjie Song, Yiwen Liu, Dapeng Li, and more
Potential Business Impact:
Makes summaries better by balancing different goals.
Text summarization is a crucial task that requires the simultaneous optimization of multiple objectives, including consistency, coherence, relevance, and fluency, which presents considerable challenges. Although large language models (LLMs) have demonstrated remarkable performance when enhanced by reinforcement learning (RL), few studies have focused on optimizing summarization as a multi-objective problem through LLM-based RL. In this paper, we introduce hypervolume optimization (HVO), a novel optimization strategy that dynamically adjusts the scores between groups during the reward process in RL using the hypervolume method. This guides the model's optimization to progressively approximate the Pareto front, thereby generating summaries that are balanced across multiple objectives. Experimental results on several representative summarization datasets demonstrate that our method outperforms group relative policy optimization (GRPO) in overall scores and shows more balanced performance across different dimensions. Moreover, a 7B foundation model enhanced by HVO performs comparably to GPT-4 on the summarization task while maintaining a shorter generation length. Our code is publicly available at https://github.com/ai4business-LiAuto/HVO.git
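To make the mechanism concrete, below is a minimal sketch of hypervolume-based group reward shaping, assuming a GRPO-style setup where each prompt yields a group of candidate summaries scored on the four objectives. The function names, the Monte Carlo hypervolume estimator, and the zero reference point are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def hypervolume_mc(points, ref, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the hypervolume (maximization) dominated
    by `points`, measured from the reference point `ref`.

    points : (G, K) array of objective scores, one row per summary.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    points = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    upper = points.max(axis=0)                # tight bounding-box corner
    box_vol = float(np.prod(upper - ref))
    if box_vol <= 0.0:
        return 0.0
    # A uniform sample in [ref, upper] is dominated if some point
    # is at least as good on every objective.
    u = rng.uniform(ref, upper, size=(n_samples, len(ref)))
    dominated = (points[None, :, :] >= u[:, None, :]).all(axis=2).any(axis=1)
    return box_vol * dominated.mean()

def hv_contribution_rewards(scores, ref):
    """Scalar reward per sample: hypervolume of the whole group minus
    hypervolume of the group without that sample. Points that extend
    the Pareto front on balanced trade-offs earn the largest
    contributions; dominated points earn ~0, steering the policy
    toward the front."""
    scores = np.asarray(scores, dtype=float)
    hv_all = hypervolume_mc(scores, ref)
    rewards = np.empty(len(scores))
    for i in range(len(scores)):
        rest = np.delete(scores, i, axis=0)
        # Clip at 0: tiny negatives can appear from Monte Carlo noise.
        rewards[i] = max(0.0, hv_all - hypervolume_mc(rest, ref))
    return rewards

# Toy group: G = 4 sampled summaries, each scored on
# (consistency, coherence, relevance, fluency) in [0, 1].
scores = np.array([
    [0.9, 0.4, 0.8, 0.7],
    [0.6, 0.7, 0.7, 0.8],   # balanced -> large contribution
    [0.5, 0.9, 0.6, 0.6],
    [0.4, 0.3, 0.5, 0.5],   # dominated by row 2 -> ~0 contribution
])
rewards = hv_contribution_rewards(scores, ref=np.zeros(4))
# GRPO-style group-normalized advantages from the hypervolume rewards.
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
print(rewards, advantages)
```

Feeding these contribution rewards into the group-normalized advantage, in place of a fixed weighted sum of the per-objective scores, is one plausible reading of the "dynamic adjustment between groups" the abstract describes; the paper's exact formulation may differ.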
Similar Papers
Topic-Guided Reinforcement Learning with LLMs for Enhancing Multi-Document Summarization
Computation and Language
Helps computers write better summaries from many stories.
Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting
Machine Learning (CS)
Teaches AI to balance many goals at once.
Advancing Speech Summarization in Multi-modal LLMs with Reinforcement Learning
Audio and Speech Processing
Makes computers understand spoken words better.