Implicit Constraint-Aware Off-Policy Correction for Offline Reinforcement Learning
By: Ali Baheri
Potential Business Impact:
Teaches computers to follow rules for better learning.
Offline reinforcement learning promises policy improvement from logged interaction data alone, yet state-of-the-art algorithms remain vulnerable to value overestimation and to violations of domain knowledge such as monotonicity or smoothness. We introduce implicit constraint-aware off-policy correction, a framework that embeds structural priors directly inside every Bellman update. The key idea is to compose the optimal Bellman operator with a proximal projection onto a convex constraint set, producing a new operator that (i) remains a $\gamma$-contraction, (ii) possesses a unique fixed point, and (iii) enforces the prescribed structure exactly. A differentiable optimization layer solves the projection, and implicit differentiation supplies gradients for deep function approximators at a cost comparable to implicit Q-learning. On a synthetic Bid-Click auction -- where the true value is provably monotone in the bid -- our method eliminates all monotonicity violations and outperforms conservative Q-learning and implicit Q-learning in return, regret, and sample efficiency.
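The key construction described above is the composed operator $\mathcal{T}_C = \Pi_C \circ \mathcal{T}^*$ (notation ours): an optimal Bellman backup followed by a proximal projection onto the convex constraint set $C$. Below is a minimal sketch, not the authors' implementation, of that composition in the simplest tabular setting, assuming a deterministic MDP whose actions index a sorted bid grid and taking monotonicity in the bid as the constraint, so the projection reduces to isotonic regression (pool adjacent violators). All names here (pav_isotonic, projected_bellman_backup, rewards, next_state) are illustrative assumptions; the deep-RL version in the abstract instead solves the projection with a differentiable optimization layer and obtains gradients by implicit differentiation.

import numpy as np

def pav_isotonic(y):
    """Euclidean projection of y onto the set of non-decreasing vectors
    (pool-adjacent-violators, unit weights)."""
    level, weight = [], []
    for v in np.asarray(y, dtype=float):
        level.append(float(v))
        weight.append(1.0)
        # merge adjacent blocks while the non-decreasing constraint is violated
        while len(level) > 1 and level[-2] > level[-1]:
            w = weight[-2] + weight[-1]
            m = (level[-2] * weight[-2] + level[-1] * weight[-1]) / w
            level[-2:] = [m]
            weight[-2:] = [w]
    return np.concatenate([np.full(int(w), m) for m, w in zip(level, weight)])

def projected_bellman_backup(q_table, rewards, next_state, gamma=0.99):
    """One application of the composed operator: optimal Bellman backup,
    then projection onto {q : q non-decreasing in the bid} for every state.

    q_table    : [num_states, num_bids] current Q estimates
    rewards    : [num_states, num_bids] expected immediate reward r(s, bid)
    next_state : [num_states, num_bids] deterministic successor-state indices
    """
    greedy_next = q_table.max(axis=1)                          # max over bids at s'
    targets = rewards + gamma * greedy_next[next_state]        # optimal Bellman operator
    return np.vstack([pav_isotonic(row) for row in targets])   # proximal projection

# Usage on a toy random MDP: iterate the projected operator toward its fixed point.
rng = np.random.default_rng(0)
Q = np.zeros((4, 5))
R = rng.uniform(size=(4, 5))
S_next = rng.integers(0, 4, size=(4, 5))
for _ in range(200):
    Q = projected_bellman_backup(Q, R, S_next)
print(np.all(np.diff(Q, axis=1) >= -1e-9))  # True: every row is monotone in the bid

Since the abstract states that the composed operator remains a $\gamma$-contraction with a unique fixed point, the iteration in the usage snippet converges, and every iterate satisfies the monotonicity constraint exactly by construction.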
Similar Papers
Imagination-Limited Q-Learning for Offline Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn from past mistakes.
Automatic Reward Shaping from Confounded Offline Data
Artificial Intelligence
Makes AI learn safely from bad past game experiences.
Adaptive Neighborhood-Constrained Q Learning for Offline Reinforcement Learning
Machine Learning (CS)
Helps robots learn from past mistakes safely.