Accelerated Learning with Linear Temporal Logic using Differentiable Simulation
By: Alper Kamil Bozkurt, Calin Belta, Ming C. Lin
Potential Business Impact:
Teaches robots to follow rules safely and quickly.
Ensuring that learned controllers comply with safety and reliability requirements remains challenging for reinforcement learning in real-world settings. Traditional safety assurance approaches, such as state avoidance and constrained Markov decision processes, often inadequately capture trajectory requirements or may result in overly conservative behaviors. To address these limitations, recent studies advocate the use of formal specification languages such as linear temporal logic (LTL), enabling the derivation of correct-by-construction learning objectives from the specified requirements. However, the sparse rewards associated with LTL specifications make learning extremely difficult, whereas dense heuristic-based rewards risk compromising correctness. In this work, we propose the first method, to our knowledge, that integrates LTL with differentiable simulators, enabling efficient gradient-based learning directly from LTL specifications. Our approach introduces soft labeling to obtain differentiable rewards and states, effectively mitigating the sparse-reward issue intrinsic to LTL without compromising objective correctness. We validate the efficacy of our method through experiments, demonstrating significant improvements in both reward attainment and training time over discrete baseline methods.
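To give a concrete sense of the soft-labeling idea, the minimal JAX sketch below shows one common way a Boolean atomic-proposition label (e.g., "the state is inside the goal region") can be relaxed into a smooth surrogate, so that rewards derived from an LTL objective become differentiable with respect to the simulator state. This is an illustration under stated assumptions, not the authors' implementation; the names `hard_label`, `soft_label`, and the sigmoid-of-signed-distance relaxation are hypothetical choices for exposition.

```python
import jax
import jax.numpy as jnp

def hard_label(state, goal, radius=0.5):
    # Discrete label: 1.0 inside the goal region, 0.0 outside.
    # Its gradient w.r.t. `state` is zero almost everywhere,
    # which is why LTL-derived rewards are sparse and hard to learn from.
    return jnp.where(jnp.linalg.norm(state - goal) < radius, 1.0, 0.0)

def soft_label(state, goal, radius=0.5, temperature=10.0):
    # Smooth relaxation (hypothetical): a sigmoid of the signed distance
    # to the goal boundary. As `temperature` grows, this approaches the
    # hard label while remaining differentiable everywhere.
    signed_dist = radius - jnp.linalg.norm(state - goal)
    return jax.nn.sigmoid(temperature * signed_dist)

# With a differentiable simulator, gradients of such a soft reward can
# flow back through the trajectory to the policy parameters.
grad_fn = jax.grad(lambda s: soft_label(s, jnp.array([1.0, 1.0])))
print(grad_fn(jnp.array([0.8, 0.9])))  # non-zero gradient near the boundary
```

The key design point the sketch illustrates is that the relaxation trades off fidelity to the Boolean label against gradient informativeness via the temperature parameter; how the paper resolves this trade-off while preserving objective correctness is its central contribution.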
Similar Papers
Automatic Generation of Safety-compliant Linear Temporal Logic via Large Language Model: A Self-supervised Framework
Logic in Computer Science
Makes sure computer instructions are safe.
Expressive Temporal Specifications for Reward Monitoring
Machine Learning (CS)
Teaches robots to learn faster with better feedback.