First-order Sobolev Reinforcement Learning
By: Fabian Schramm, Nicolas Perrin-Gilbert, Justin Carpentier
Potential Business Impact:
Teaches computers to learn faster and more reliably.
We propose a refinement of temporal-difference learning that enforces first-order Bellman consistency: the learned value function is trained to match not only the Bellman targets in value but also their derivatives with respect to states and actions. By differentiating the Bellman backup through differentiable dynamics, we obtain analytically consistent gradient targets. Incorporating these into the critic objective using a Sobolev-type loss encourages the critic to align with both the value and local geometry of the target function. This first-order TD matching principle can be seamlessly integrated into existing algorithms, such as Q-learning or actor-critic methods (e.g., DDPG, SAC), potentially leading to faster critic convergence and more stable policy gradients without altering their overall structure.
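To make the first-order TD matching idea concrete, here is a minimal, self-contained JAX sketch of a Sobolev-type critic loss: the Bellman backup is differentiated through a differentiable dynamics model to produce gradient targets, and the critic is penalized for mismatching both the value and its derivatives. The toy dynamics, reward, policy, quadratic critic, and the constants `GAMMA` and `SOBOLEV_WEIGHT` are illustrative assumptions, not the authors' implementation.

```python
# A sketch of a first-order (Sobolev) TD loss, assuming differentiable
# dynamics and a deterministic policy. All components below are toy
# placeholders for illustration only.
import jax
import jax.numpy as jnp

GAMMA = 0.99          # discount factor (assumed)
SOBOLEV_WEIGHT = 1.0  # weight on the gradient-matching term (assumed)

# --- Illustrative differentiable components (assumptions, not from the paper) ---
def dynamics(s, a):                 # s' = f(s, a), differentiable in (s, a)
    return 0.9 * s + 0.1 * jnp.tanh(a)

def reward(s, a):                   # r(s, a), scalar
    return -jnp.sum(s ** 2) - 0.01 * jnp.sum(a ** 2)

def policy(s):                      # deterministic policy pi(s)
    return -0.5 * s

def q_apply(params, s, a):          # tiny quadratic critic, scalar output
    W, b = params
    x = jnp.concatenate([s, a])
    return x @ W @ x + b

def bellman_target(tgt_params, s, a):
    """Differentiable backup y(s, a) = r(s, a) + gamma * Q_tgt(f(s, a), pi(f(s, a)))."""
    s_next = dynamics(s, a)
    return reward(s, a) + GAMMA * q_apply(tgt_params, s_next, policy(s_next))

def sobolev_td_loss(params, tgt_params, s, a):
    """Match the Bellman target in value and in its derivatives w.r.t. (s, a)."""
    # Value and gradient targets obtained by differentiating the backup
    # through the dynamics; stop_gradient keeps them fixed targets.
    y = jax.lax.stop_gradient(bellman_target(tgt_params, s, a))
    dy_ds, dy_da = jax.lax.stop_gradient(
        jax.grad(bellman_target, argnums=(1, 2))(tgt_params, s, a)
    )
    q = q_apply(params, s, a)
    dq_ds, dq_da = jax.grad(q_apply, argnums=(1, 2))(params, s, a)

    value_err = (q - y) ** 2
    grad_err = jnp.sum((dq_ds - dy_ds) ** 2) + jnp.sum((dq_da - dy_da) ** 2)
    return value_err + SOBOLEV_WEIGHT * grad_err

if __name__ == "__main__":
    dim_s, dim_a = 3, 2
    W = -0.1 * jnp.eye(dim_s + dim_a)
    params = tgt_params = (W, jnp.array(0.0))
    s, a = jnp.ones(dim_s), jnp.zeros(dim_a)
    print(sobolev_td_loss(params, tgt_params, s, a))
```

In an actor-critic loop, this per-sample loss would be averaged over a batch (e.g., with `jax.vmap`) and differentiated with respect to the critic parameters, leaving the rest of the algorithm unchanged, in line with the abstract's claim that the principle drops into existing methods.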
Similar Papers
Reinforcement Learning From State and Temporal Differences
Machine Learning (CS)
Teaches computers to make better decisions.
Quasi-Newton Compatible Actor-Critic for Deterministic Policies
Machine Learning (CS)
Teaches computers to learn faster by watching mistakes.
Reinforcement Learning with Imperfect Transition Predictions: A Bellman-Jensen Approach
Machine Learning (CS)
Helps computers make better choices with future guesses.