Global Optimality of Single-Timescale Actor-Critic under Continuous State-Action Space: A Study on Linear Quadratic Regulator
By: Xuyang Chen, Jingliang Duan, Lin Zhao
Potential Business Impact:
Makes smart programs learn better in complex worlds.
Actor-critic methods have achieved state-of-the-art performance in various challenging tasks. However, theoretical understanding of their performance remains elusive and challenging. Existing studies mostly focus on practically uncommon variants, such as double-loop or two-timescale-stepsize actor-critic algorithms, for simplicity, and these results only certify local convergence on finite state or action spaces. We push the boundary by investigating the classic single-sample, single-timescale actor-critic on a continuous (infinite) state-action space, where we employ the canonical linear quadratic regulator (LQR) problem as a case study. We show that the popular single-timescale actor-critic attains an epsilon-optimal solution with a sample complexity of order epsilon^{-2} for solving LQR on the demanding continuous state-action space. Our work provides new insights into the performance of single-timescale actor-critic and further bridges the gap between theory and practice.
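To make the setting concrete, below is a minimal illustrative sketch of a single-sample, single-timescale actor-critic loop on a toy LQR instance: the actor and critic are updated simultaneously from one transition per step, with stepsizes of the same order. The dynamics, quadratic Q-function features, TD(0) critic, REINFORCE-style actor step, discount factor, and stepsizes are all assumptions for illustration, not the paper's exact algorithm or analysis.

```python
# Hypothetical sketch: single-sample, single-timescale actor-critic on a toy LQR.
import numpy as np

rng = np.random.default_rng(0)

# Toy LQR instance: x_{t+1} = A x_t + B u_t + w_t, stage cost x'Qx + u'Ru.
n, m = 2, 1
A = np.array([[0.9, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(n), np.eye(m)

def feature(x, u):
    """Quadratic critic features: upper-triangular part of z z' with z = [x; u]."""
    z = np.concatenate([x, u])
    return np.outer(z, z)[np.triu_indices(len(z))]

d = (n + m) * (n + m + 1) // 2   # critic dimension
K = np.zeros((m, n))             # actor: linear policy u = -K x + exploration noise
w = np.zeros(d)                  # critic: quadratic Q-function weights
alpha, beta = 1e-4, 1e-4         # single timescale: actor/critic stepsizes of the same order
sigma, gamma = 0.1, 0.95         # exploration noise std and discount (illustrative)

x = rng.normal(size=n)
for t in range(50_000):
    u = -K @ x + sigma * rng.normal(size=m)
    cost = x @ Q @ x + u @ R @ u
    x_next = A @ x + B @ u + 0.01 * rng.normal(size=n)
    u_next = -K @ x_next                      # greedy next action under the current policy

    # Critic: single-sample TD(0) update of the Q-function weights.
    phi, phi_next = feature(x, u), feature(x_next, u_next)
    td_error = cost + gamma * (w @ phi_next) - (w @ phi)
    w += beta * td_error * phi

    # Actor: single-sample policy-gradient step, using the critic's Q-estimate
    # as the return signal (score-function form for Gaussian exploration).
    q_hat = w @ phi
    score = -(u + K @ x)[:, None] @ x[None, :] / sigma**2   # grad_K log pi(u|x)
    K -= alpha * q_hat * score                               # descent on expected cost

    x = x_next
```

The key point the sketch illustrates is the "single-timescale" coupling: both updates share one sample per iteration and comparable stepsizes (alpha, beta), rather than running an inner critic loop to convergence (double-loop) or shrinking the actor stepsize much faster than the critic's (two-timescale).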
Similar Papers
Actor-Critics Can Achieve Optimal Sample Efficiency
Machine Learning (Stat)
Teaches computers to learn faster with less data.
Actor-Free Continuous Control via Structurally Maximizable Q-Functions
Machine Learning (CS)
Teaches robots to learn actions without guessing.
Optimal Output Feedback Learning Control for Discrete-Time Linear Quadratic Regulation
Systems and Control
Teaches robots to learn how to control things.