Expressive Temporal Specifications for Reward Monitoring
By: Omar Adalat, Francesco Belardinelli
Potential Business Impact:
Teaches robots to learn faster with better feedback.
Specifying informative and dense reward functions remains a pivotal challenge in Reinforcement Learning, as it directly affects the efficiency of agent training. In this work, we harness the expressive power of quantitative Linear Temporal Logic on finite traces ($\text{LTL}_f[\mathcal{F}]$) to synthesize reward monitors that generate a dense stream of rewards for runtime-observable state trajectories. By providing nuanced feedback during training, these monitors guide agents toward optimal behaviour and help mitigate the well-known issue of sparse rewards in long-horizon decision making, which arises under the Boolean semantics that dominates the current literature. Our framework is algorithm-agnostic, relies only on a state labelling function, and naturally accommodates the specification of non-Markovian properties. Empirical results show that our quantitative monitors consistently subsume and, depending on the environment, outperform Boolean monitors in maximizing a quantitative measure of task completion and in reducing convergence time.
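The abstract does not spell out the monitor's semantics or its reward scheme, so the following is only a minimal sketch: it assumes a max/min ("robustness-style") quantitative semantics for a small $\text{LTL}_f$ fragment and a shaping-style reward equal to the change in the monitor's value on the growing prefix of the observed trace. The names (`Atom`, `Eventually`, `dense_rewards`, `label`) are illustrative, not from the paper.

```python
# Minimal sketch (not the authors' implementation) of a quantitative
# LTLf-style reward monitor over finite traces, assuming a max/min
# quantitative semantics and a user-supplied quantitative labelling function.

from dataclasses import dataclass
from typing import Callable, Dict, List

Label = Dict[str, float]             # quantitative value of each atomic proposition
LabelFn = Callable[[object], Label]  # maps an environment state to its labels

@dataclass
class Atom:
    name: str

@dataclass
class Not:
    sub: "Formula"

@dataclass
class And:
    left: "Formula"
    right: "Formula"

@dataclass
class Eventually:
    sub: "Formula"

@dataclass
class Always:
    sub: "Formula"

Formula = object  # union of the node types above

def value(phi: Formula, trace: List[Label], i: int = 0) -> float:
    """Quantitative value of phi on the suffix of `trace` starting at position i."""
    if isinstance(phi, Atom):
        return trace[i][phi.name]
    if isinstance(phi, Not):
        return -value(phi.sub, trace, i)
    if isinstance(phi, And):
        return min(value(phi.left, trace, i), value(phi.right, trace, i))
    if isinstance(phi, Eventually):
        return max(value(phi.sub, trace, j) for j in range(i, len(trace)))
    if isinstance(phi, Always):
        return min(value(phi.sub, trace, j) for j in range(i, len(trace)))
    raise TypeError(f"unsupported formula node: {phi!r}")

def dense_rewards(phi: Formula, states: List[object], label: LabelFn) -> List[float]:
    """Emit one reward per step as the change in the monitor's value on the
    growing prefix of the trace (a shaping-style scheme assumed for illustration)."""
    rewards: List[float] = []
    trace: List[Label] = []
    prev = 0.0
    for s in states:
        trace.append(label(s))
        cur = value(phi, trace)
        rewards.append(cur - prev)
        prev = cur
    return rewards
```

For instance, with `phi = Eventually(Atom("at_goal"))` and a labelling function that returns the negated distance to the goal, the per-step rewards track the agent's best progress toward the goal rather than a single sparse bonus at task completion, which is the kind of dense feedback the abstract describes.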
Similar Papers
Expressive Reward Synthesis with the Runtime Monitoring Language
Machine Learning (CS)
Teaches robots to learn complex tasks better.