Parent-Guided Semantic Reward Model (PGSRM): Embedding-Based Reward Functions for Reinforcement Learning of Transformer Language Models
By: Alexandr Plashchinsky
Potential Business Impact:
Teaches computers to write better using a smart trick.
We introduce the Parent-Guided Semantic Reward Model (PGSRM), a lightweight reward framework for reinforcement learning (RL) of transformer language models. PGSRM replaces binary correctness signals, human preference data, and trained reward models with a simple signal: the cosine similarity between the embedding of a parent model's reference output and the embedding of a child model's generated output for the same input. This yields a dense, semantically meaningful reward with no human annotation or additional model training. We apply PGSRM to five language tasks and find that it produces smoother reward improvement and more stable PPO dynamics than a binary reward baseline, suggesting that embedding-based semantic rewards are a practical alternative to RLHF-style reward modeling for parent-guided alignment in smaller transformer models.
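The core reward described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the toy vectors stand in for sentence embeddings that would in practice come from an encoder applied to the parent's and child's outputs.

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def pgsrm_reward(parent_embedding, child_embedding):
    # Dense reward in [-1, 1]: semantic agreement between the parent
    # model's reference output and the child model's generation for
    # the same prompt. No labels or trained reward model required.
    return cosine_similarity(parent_embedding, child_embedding)

# Toy embeddings standing in for encoder outputs.
reference = [0.2, 0.9, 0.1]   # parent's reference output, embedded
generated = [0.25, 0.85, 0.15]  # child's generation, embedded
reward = pgsrm_reward(reference, generated)
```

In a full PPO loop, this scalar would be assigned to each completed generation in place of a binary correctness signal, which is what gives the dense, smoothly varying reward the abstract attributes to PGSRM.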
Similar Papers
Shaping Explanations: Semantic Reward Modeling with Encoder-Only Transformers for GRPO
Computation and Language
Teaches AI to explain things clearly and correctly.
LinguaFluid: Language Guided Fluid Control via Semantic Rewards in Reinforcement Learning
Machine Learning (CS)
Teaches robots to follow written instructions.