FlowCritic: Bridging Value Estimation with Flow Matching in Reinforcement Learning
By: Shan Zhong, Shutong Ding, He Diao, and more
Potential Business Impact:
Teaches computers to learn better by guessing values.
Reliable value estimation serves as the cornerstone of reinforcement learning (RL): it evaluates long-term returns and guides policy improvement, significantly influencing convergence speed and final performance. Existing works improve the reliability of value function estimation via multi-critic ensembles and distributional RL, yet the former merely aggregates multiple point estimates without capturing distributional information, whereas the latter relies on discretization or quantile regression, limiting its ability to express complex value distributions. Inspired by flow matching's success in generative modeling, we propose a generative paradigm for value estimation, named FlowCritic. Departing from conventional regression for deterministic value prediction, FlowCritic leverages flow matching to model value distributions and generate samples for value estimation.
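The abstract does not give FlowCritic's training details, but the flow-matching machinery it builds on can be sketched. Below is a minimal NumPy illustration of conditional flow matching with straight-line (rectified-flow) paths: a training pair is an interpolant between a noise sample and a return sample, with the constant velocity as the regression target, and value samples are generated by Euler-integrating a learned velocity field from noise. The critic network, conditioning on states and actions, and the training loop are omitted; all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "value distribution" for a single state: a bimodal mixture of returns.
# FlowCritic would condition the velocity field on (state, action); omitted here.
returns = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(3.0, 0.5, 500)])

def cfm_training_pair(x1, rng):
    """Build one conditional-flow-matching training example.

    x1: batch of target samples (here, observed returns).
    Returns the interpolant x_t, the time t, and the velocity target
    u = x1 - x0 that a critic network would regress onto.
    """
    x0 = rng.standard_normal(x1.shape)   # noise endpoint of the path
    t = rng.uniform(size=x1.shape)       # random time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1        # straight-line interpolation
    u_target = x1 - x0                   # constant velocity along that line
    return x_t, t, u_target

def euler_sample(velocity_fn, x0, steps=20):
    """Generate value samples by integrating dx/dt = v(x, t) from t=0 to t=1."""
    x, dt = x0.copy(), 1.0 / steps
    for k in range(steps):
        x = x + dt * velocity_fn(x, k * dt)
    return x
```

As a sanity check of the sampler alone: for a point-mass target at `x1` with straight-line paths, the exact marginal velocity is `(x1 - x) / (1 - t)`, and Euler integration transports any noise sample onto `x1`. In FlowCritic-style use, `velocity_fn` would instead be a trained network, and statistics of the generated samples (mean, quantiles) would serve as the value estimate.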
Similar Papers
Value Gradient Guidance for Flow Matching Alignment
Machine Learning (CS)
Makes AI art creation faster and better.
Fine-tuning Flow Matching Generative Models with Intermediate Feedback
Machine Learning (CS)
Makes AI pictures better match your words.
Reverse Flow Matching: A Unified Framework for Online Reinforcement Learning with Diffusion and Flow Policies
Machine Learning (CS)
Teaches robots to learn faster from mistakes.