GRPO-RM: Fine-Tuning Representation Models via GRPO-Driven Reinforcement Learning

Published: November 19, 2025 | arXiv ID: 2511.15256v1

By: Yanchen Xu, Ziheng Jiao, Hongyuan Zhang and more

Potential Business Impact:

Extends a reinforcement learning fine-tuning method (GRPO), proven on large language models, to representation models, so these models can be further improved after standard training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Group Relative Policy Optimization (GRPO), a reinforcement learning method used to fine-tune large language models (LLMs), has proved effective in practical applications such as DeepSeek-R1. This raises the question of whether GRPO can be generalized to representation learning models. In this paper, we propose Group Relative Policy Optimization for Representation Models (GRPO-RM) and investigate the performance of a GRPO-like policy in the post-training of representation models. Specifically, our method establishes a predefined output set to functionally replace token sequence sampling in LLMs, thereby generating an output group, which is essential for the probability-driven optimization of GRPO. In addition, a specialized reward function is designed to accommodate the properties of representation models. Extensive experiments are conducted on various real-world datasets to validate the effectiveness of the proposed method.
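To make the abstract's mechanism concrete, below is a minimal, hedged sketch of what a GRPO-style update for a representation model could look like: a predefined output set (here assumed to be class prototypes) stands in for token-sequence sampling, a group of candidate outputs is sampled per input, rewards are normalized within each group to form advantages, and a KL penalty keeps the fine-tuned model near a frozen reference. The `Encoder`, the exact-match reward, the prototype output set, and hyperparameters such as `group_size` and `beta` are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of a GRPO-like step adapted to a representation model.
# The paper's exact output set and reward function are not reproduced here;
# the class-prototype output set, the 0/1 reward, and all names below are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy representation model: maps inputs to a normalized embedding."""
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def grpo_rm_loss(policy, reference, x, y, prototypes,
                 group_size=8, beta=0.04, eps=1e-6):
    """Sample a group of outputs from a predefined output set, score them
    with a reward, and weight their log-probabilities by group-normalized
    advantages, plus a KL penalty toward the frozen reference model."""
    z = policy(x)                                   # (B, D) embeddings
    logits = z @ prototypes.t()                     # policy scores over output set (B, K)
    log_p = F.log_softmax(logits, dim=-1)

    with torch.no_grad():
        z_ref = reference(x)
        log_p_ref = F.log_softmax(z_ref @ prototypes.t(), dim=-1)

        # Sample a group of candidate outputs per input.
        dist = torch.distributions.Categorical(logits=logits)
        samples = dist.sample((group_size,)).t()    # (B, G)

        # Assumed reward: 1 if the sampled output matches the label, else 0.
        rewards = (samples == y.unsqueeze(1)).float()

        # Group-relative advantage: normalize rewards within each group.
        adv = (rewards - rewards.mean(dim=1, keepdim=True)) / \
              (rewards.std(dim=1, keepdim=True) + eps)

    # Policy-gradient term on the sampled outputs.
    log_p_samples = log_p.gather(1, samples)        # (B, G)
    pg_loss = -(adv * log_p_samples).mean()

    # KL(policy || reference) penalty over the output set.
    kl = (log_p.exp() * (log_p - log_p_ref)).sum(dim=-1).mean()
    return pg_loss + beta * kl


# Minimal usage on toy data.
torch.manual_seed(0)
policy, reference = Encoder(), Encoder()
reference.load_state_dict(policy.state_dict())
for p in reference.parameters():
    p.requires_grad_(False)
prototypes = F.normalize(torch.randn(10, 16), dim=-1)  # predefined output set (K=10)
x, y = torch.randn(4, 32), torch.randint(0, 10, (4,))
loss = grpo_rm_loss(policy, reference, x, y, prototypes)
loss.backward()
```

The sketch keeps the two ingredients the abstract emphasizes: a finite output set that makes group sampling and probability-based optimization possible without token sequences, and a reward tailored to the representation task (here simplified to label agreement).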

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)