Score: 1

Relative Trajectory Balance is equivalent to Trust-PCL

Published: September 1, 2025 | arXiv ID: 2509.01632v1

By: Tristan Deleu, Padideh Nouri, Yoshua Bengio, and more

Potential Business Impact:

Shows that an existing, well-understood reinforcement learning method can fine-tune generative AI models as effectively as a newer alternative, clarifying which fine-tuning techniques to use.

Business Areas:
A/B Testing, Data and Analytics

Recent progress in generative modeling has highlighted the importance of Reinforcement Learning (RL) for fine-tuning, with KL-regularized methods in particular proving to be highly effective for both autoregressive and diffusion models. Complementing this line of work, the Relative Trajectory Balance (RTB) objective was recently introduced in the context of Generative Flow Networks (GFlowNets) to serve the same role of improving fine-tuning in sequential generative models. Building on prior work linking GFlowNets and maximum-entropy RL, we establish in this paper an equivalence between RTB and Trust-PCL, an off-policy RL method with KL regularization. This equivalence situates RTB within the broader theoretical landscape of KL-regularized RL, and clarifies its relationship to earlier methods. Leveraging this insight, we revisit an illustrative example from the RTB paper and show that KL-regularized RL methods achieve comparable performance, offering an alternative perspective to what was previously reported.
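For readers who want the technical core, a rough sketch of the two objectives follows, using standard notation from the GFlowNet and KL-regularized RL literature; the symbols $Z_\theta$, $R$, $p_\theta$, and $p^{\text{prior}}$ are not defined in this summary and are assumed here, and the paper's exact derivation may differ. For a complete trajectory $\tau = (x_0, \dots, x_T)$, the RTB loss enforces a trajectory-level balance constraint between the fine-tuned model $p_\theta$ and the pretrained prior $p^{\text{prior}}$, weighted by a reward $R$:

\[
\mathcal{L}_{\mathrm{RTB}}(\tau;\theta) \;=\; \left( \log \frac{Z_\theta \prod_{t=1}^{T} p_\theta(x_t \mid x_{t-1})}{R(x_T)\,\prod_{t=1}^{T} p^{\text{prior}}(x_t \mid x_{t-1})} \right)^{\!2}.
\]

Its global minimizer is the reward-tilted distribution $p^\ast(\tau) \propto p^{\text{prior}}(\tau)\, R(x_T)$, which is also the optimum of the KL-regularized RL objective underlying Trust-PCL, with reward $\log R$ and regularization strength $\beta = 1$:

\[
\max_\theta \;\; \mathbb{E}_{\tau \sim p_\theta}\!\left[ \log R(x_T) \right] \;-\; \beta\, \mathrm{KL}\!\left( p_\theta(\tau) \,\|\, p^{\text{prior}}(\tau) \right).
\]

Matching optima of this kind is the usual route to such equivalence results; the paper makes the correspondence to Trust-PCL precise.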

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)