Context-Sensitive Abstractions for Reinforcement Learning with Parameterized Actions
By: Rashmeet Kaur Nayyar, Naman Shah, Siddharth Srivastava
Real-world sequential decision-making often involves parameterized action spaces that require both decisions about which discrete action to take and decisions about the continuous parameters governing how that action is executed. Existing approaches exhibit severe limitations in this setting: planning methods demand hand-crafted action models, standard reinforcement learning (RL) algorithms are designed for either discrete or continuous actions but not both, and the few RL methods that do handle parameterized actions typically rely on domain-specific engineering and fail to exploit the latent structure of these spaces. This paper extends the scope of RL algorithms to long-horizon, sparse-reward settings with parameterized actions by enabling agents to autonomously learn both state and action abstractions online. We introduce algorithms that progressively refine these abstractions during learning, adding fine-grained detail in the critical regions of the state-action space where greater resolution improves performance. Across several continuous-state, parameterized-action domains, our abstraction-driven approach enables TD($\lambda$) to achieve markedly higher sample efficiency than state-of-the-art baselines.
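To make the idea of a parameterized action space and an online-refined abstraction concrete, the following minimal sketch shows TD(lambda) value learning over a coarse state partition that is split where TD errors concentrate, in a toy 1-D domain whose actions pair a discrete choice with a continuous parameter. This is an illustration of the general idea only, not the paper's algorithm: the environment, the AdaptiveGrid class, the random exploration policy, and the error-accumulation threshold are all invented for the example, and the paper's actual methods also abstract the continuous action parameters, which this toy does not.

# Illustrative sketch only (not the paper's algorithm): TD(lambda) over a coarse
# state abstraction that is refined where TD errors concentrate, in a toy 1-D
# domain whose actions pair a discrete choice with a continuous parameter.
import random

class Toy1DEnv:
    # State x in [0, 1]; action = (direction, step_size); sparse reward near the goal.
    def reset(self):
        self.x = 0.0
        return self.x

    def step(self, direction, step_size):
        # direction: 0 = left, 1 = right; step_size: continuous parameter in [0, 0.2].
        delta = step_size if direction == 1 else -step_size
        self.x = min(1.0, max(0.0, self.x + delta))
        done = self.x > 0.95
        return self.x, (1.0 if done else 0.0), done

class AdaptiveGrid:
    # Partition of [0, 1] into intervals; an interval is split when TD errors accumulate.
    def __init__(self):
        self.boundaries = [0.0, 0.5, 1.0]      # initial coarse abstraction: two cells
        self.td_error_sum = {}

    def cell(self, x):
        # Return the (lo, hi) interval containing x; intervals act as abstract states.
        for i in range(len(self.boundaries) - 1):
            if x < self.boundaries[i + 1] or i == len(self.boundaries) - 2:
                return (self.boundaries[i], self.boundaries[i + 1])

    def maybe_refine(self, cell, td_error, threshold=5.0):
        # Refine only "critical" cells: those where TD errors keep piling up.
        total = self.td_error_sum.get(cell, 0.0) + abs(td_error)
        self.td_error_sum[cell] = total
        if total > threshold:
            lo, hi = cell
            self.boundaries.insert(self.boundaries.index(hi), (lo + hi) / 2.0)
            del self.td_error_sum[cell]

def run(episodes=200, gamma=0.99, lam=0.9, alpha=0.1):
    env, grid, values = Toy1DEnv(), AdaptiveGrid(), {}
    for _ in range(episodes):
        x, traces, done = env.reset(), {}, False
        while not done:
            direction = random.choice([0, 1])      # discrete action choice (random policy)
            step_size = random.uniform(0.0, 0.2)   # continuous action parameter
            s = grid.cell(x)
            x, reward, done = env.step(direction, step_size)
            td_error = (reward + gamma * values.get(grid.cell(x), 0.0) * (not done)
                        - values.get(s, 0.0))
            traces[s] = traces.get(s, 0.0) + 1.0
            for k in traces:                       # TD(lambda) update via eligibility traces
                values[k] = values.get(k, 0.0) + alpha * td_error * traces[k]
                traces[k] *= gamma * lam
            grid.maybe_refine(s, td_error)         # newly split cells start from fresh values

    return values, grid.boundaries

if __name__ == "__main__":
    _, boundaries = run()
    print("abstract-state boundaries:", [round(b, 4) for b in boundaries])

Running the sketch prints a finer partition near the high-error (goal-adjacent) region than elsewhere, which is the qualitative behavior the abstract describes: resolution is added only where it helps, keeping the rest of the space coarse and sample-efficient.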