Stationary Reweighting Yields Local Convergence of Soft Fitted Q-Iteration
By: Lars van der Laan, Nathan Kallus
Fitted Q-iteration (FQI) and its entropy-regularized variant, soft FQI, are central tools for value-based model-free offline reinforcement learning, but can behave poorly under function approximation and distribution shift. In the entropy-regularized setting, we show that the soft Bellman operator is locally contractive in the stationary norm of the soft-optimal policy, rather than in the behavior norm used by standard FQI. This geometric mismatch explains the instability of soft Q-iteration with function approximation in the absence of Bellman completeness. To restore contraction, we introduce stationary-reweighted soft FQI, which reweights each regression update using the stationary distribution of the current policy. We prove local linear convergence under function approximation with geometrically damped weight-estimation errors, assuming approximate realizability. Our analysis further suggests that global convergence may be recovered by gradually reducing the softmax temperature, and that this continuation approach can extend to the hardmax limit under a mild margin condition.
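For readers unfamiliar with the objects named in the abstract, the display below gives the standard form of the soft Bellman operator at temperature tau and of a weighted L2 norm; the contraction claim contrasts the norm weighted by the stationary distribution of the soft-optimal policy with the norm weighted by the behavior distribution. These are textbook definitions used for orientation, not the paper's exact statement, and the paper's normalization may differ.

```latex
% Soft Bellman operator at temperature \tau and a weighted L2 norm
% (standard forms; the paper's precise normalization may differ).
\[
(\mathcal{T}_\tau Q)(s,a)
  = r(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}
    \Big[\tau \log \sum_{a'} \exp\big(Q(s',a')/\tau\big)\Big],
\qquad
\|Q\|_{d}^{2} = \sum_{s,a} d(s,a)\, Q(s,a)^{2}.
\]
```

The following is a minimal, population-level sketch of how the stationary-reweighted update could look in a tabular MDP with linear function approximation. All names (`reweighted_soft_fqi`, `stationary_distribution`, `temp_decay`, and so on), the use of the undiscounted stationary distribution of the induced chain, the ridge term, and the temperature schedule are illustrative assumptions, not the authors' implementation; a sample-based version would weight transitions drawn from the behavior distribution mu by an estimated density ratio d_pi / mu.

```python
import numpy as np


def softmax_policy(Q, tau):
    """Soft-greedy policy pi(a|s) proportional to exp(Q(s,a)/tau)."""
    z = (Q - Q.max(axis=1, keepdims=True)) / tau
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)


def soft_bellman_targets(Q, R, P, gamma, tau):
    """(T_tau Q)(s,a) = r(s,a) + gamma * E_{s'}[ tau * log sum_a' exp(Q(s',a')/tau) ]."""
    m = Q.max(axis=1, keepdims=True)
    V = (m + tau * np.log(np.exp((Q - m) / tau).sum(axis=1, keepdims=True))).ravel()
    return R + gamma * P @ V                       # P: (S, A, S), V: (S,) -> (S, A)


def stationary_distribution(P, pi):
    """State-action stationary distribution of the chain induced by pi (assumes ergodicity)."""
    P_pi = np.einsum("sa,saz->sz", pi, P)          # state-to-state kernel under pi
    evals, evecs = np.linalg.eig(P_pi.T)
    d_s = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
    d_s /= d_s.sum()
    return d_s[:, None] * pi                       # d(s, a) = d(s) * pi(a|s)


def reweighted_soft_fqi(phi, R, P, mu, gamma=0.95, tau=1.0, n_iters=200, temp_decay=1.0):
    """Population-level stationary-reweighted soft FQI with linear function approximation.

    phi: (S, A, d) features; R: (S, A) rewards; P: (S, A, S) transitions;
    mu: (S, A) behavior (data) distribution. temp_decay < 1 enacts the
    temperature-continuation idea described in the abstract."""
    S, A, d = phi.shape
    X = phi.reshape(S * A, d)
    theta = np.zeros(d)
    for _ in range(n_iters):
        Q = phi @ theta                            # current Q estimate, (S, A)
        pi = softmax_policy(Q, tau)                # soft-greedy policy of current Q
        d_pi = stationary_distribution(P, pi)      # stationary weights of current policy
        w = d_pi / np.clip(mu, 1e-12, None)        # density-ratio reweighting of mu-samples
        y = soft_bellman_targets(Q, R, P, gamma, tau)
        # Weighted least-squares regression of soft Bellman targets onto features;
        # at the population level the effective weights mu * w equal d_pi.
        W = (mu * w).ravel()
        G = X.T @ (W[:, None] * X) + 1e-8 * np.eye(d)
        theta = np.linalg.solve(G, X.T @ (W * y.ravel()))
        tau = max(tau * temp_decay, 1e-3)          # optional continuation toward hardmax
    return theta, tau


if __name__ == "__main__":
    # Toy usage: a random ergodic MDP with random linear features (illustrative only).
    rng = np.random.default_rng(0)
    S, A, d = 6, 3, 4
    P = rng.dirichlet(np.ones(S), size=(S, A))     # (S, A, S) transition kernel
    R = rng.uniform(size=(S, A))
    phi = rng.normal(size=(S, A, d))
    mu = np.full((S, A), 1.0 / (S * A))            # uniform behavior distribution
    theta, tau_final = reweighted_soft_fqi(phi, R, P, mu, gamma=0.9, temp_decay=0.99)
    print("final temperature:", tau_final, "theta:", theta)
```

Note that the effective weight mu * (d_pi / mu) collapses to d_pi in this idealized sketch, which is exactly the reweighting the abstract describes; with finite data the ratio would have to be estimated, which is where the geometrically damped weight-estimation errors in the convergence guarantee come in.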
Similar Papers
A Unifying View of Linear Function Approximation in Off-Policy RL Through Matrix Splitting and Preconditioning
Machine Learning (CS)
Unifies learning methods, making them faster and more reliable.
Gaussian-Mixture-Model Q-Functions for Policy Iteration in Reinforcement Learning
Machine Learning (CS)
Teaches computers to make better choices faster.
Diffusion Fine-Tuning via Reparameterized Policy Gradient of the Soft Q-Function
Machine Learning (CS)
Makes AI art look better and more natural.