GRPO-RM: Fine-Tuning Representation Models via GRPO-Driven Reinforcement Learning
By: Yanchen Xu, Ziheng Jiao, Hongyuan Zhang, and more
Potential Business Impact:
Teaches AI to learn better from data.
Group Relative Policy Optimization (GRPO), a reinforcement learning method used to fine-tune large language models (LLMs), has proven effective in practical applications such as DeepSeek-R1. This raises the question of whether GRPO can be generalized to representation learning models. In this paper, we propose Group Relative Policy Optimization for Representation Model (GRPO-RM) and investigate the performance of a GRPO-like policy in post-training representation models. Specifically, our method establishes a predefined output set to functionally replace token-sequence sampling in LLMs, thereby generating an output group, which is essential for the probability-driven optimization of GRPO. In addition, a specialized reward function is designed to accommodate the properties of representation models. Extensive experiments are conducted on various real-world datasets to validate the effectiveness of our proposed method.
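The abstract describes replacing token-sequence sampling with a predefined output set that forms the candidate group for GRPO's group-relative, probability-driven update. The sketch below illustrates the general shape of such an objective only: the function name `grpo_group_loss`, the candidate rewards, and the clipped-ratio form are assumptions based on standard GRPO/PPO practice, not the paper's actual reward design or loss.

```python
# Minimal sketch (assumed, not the authors' implementation): a GRPO-style
# group-relative objective where a predefined output set of G candidates
# plays the role of sampled token sequences.
import torch


def grpo_group_loss(logits_new, logits_old, rewards, clip_eps=0.2):
    """Group-relative clipped policy loss over one candidate group.

    logits_new: (G,) logits from the model being post-trained
    logits_old: (G,) logits from the frozen pre-update model
    rewards:    (G,) scalar reward per candidate in the predefined output set
    """
    # Probabilities over the predefined output group replace
    # token-sequence sampling used for LLMs.
    logp_new = torch.log_softmax(logits_new, dim=-1)
    logp_old = torch.log_softmax(logits_old, dim=-1).detach()

    # Group-relative advantage: normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # PPO-style clipped probability ratio, applied per candidate.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()


if __name__ == "__main__":
    # Toy usage with a group of 4 candidate outputs.
    g = 4
    logits_new = torch.randn(g, requires_grad=True)
    logits_old = logits_new.detach() + 0.05 * torch.randn(g)
    rewards = torch.tensor([0.9, 0.2, 0.5, 0.1])  # hypothetical rewards
    loss = grpo_group_loss(logits_new, logits_old, rewards)
    loss.backward()
    print(float(loss))
```

The group-mean baseline in the advantage is what makes the update "group relative": no learned value function is needed, only relative reward differences among the predefined candidates.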
Similar Papers
Training-Free Group Relative Policy Optimization
Computation and Language
Teaches computers to solve new problems better.
Multi-Layer GRPO: Enhancing Reasoning and Self-Correction in Large Language Models
Machine Learning (CS)
Teaches computers to fix their own mistakes.
Group Causal Policy Optimization for Post-Training Large Language Models
Machine Learning (CS)
Makes AI better at choosing the best answers.