Expressive Temporal Specifications for Reward Monitoring

Published: November 16, 2025 | arXiv ID: 2511.12808v1

By: Omar Adalat, Francesco Belardinelli

Potential Business Impact:

Helps AI agents learn long, multi-step tasks faster by giving them denser reward feedback during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Specifying informative and dense reward functions remains a pivotal challenge in Reinforcement Learning, as it directly affects the efficiency of agent training. In this work, we harness the expressive power of quantitative Linear Temporal Logic on finite traces ($\text{LTL}_f[\mathcal{F}]$) to synthesize reward monitors that generate a dense stream of rewards for runtime-observable state trajectories. By providing nuanced feedback during training, these monitors guide agents toward optimal behaviour and help mitigate the well-known issue of sparse rewards under long-horizon decision making, which arises under the Boolean semantics dominating the current literature. Our framework is algorithm-agnostic, relies only on a state labelling function, and naturally accommodates the specification of non-Markovian properties. Empirical results show that our quantitative monitors consistently subsume and, depending on the environment, outperform Boolean monitors in maximizing a quantitative measure of task completion and in reducing convergence time.
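
To make the idea concrete, the sketch below shows one plausible way a quantitative runtime monitor could turn an "eventually reach the goal" style property into a dense per-step reward via a state labelling function. This is a minimal illustration under assumed names (`QuantEventuallyMonitor`, `closeness_to_goal`), not the paper's actual implementation or semantics.

```python
# Illustrative sketch only: a quantitative monitor for a property of the
# form F(phi), where phi is scored in [0, 1] by a state labelling function.
# The monitor emits the improvement in satisfaction at each step, giving a
# dense reward signal instead of a single Boolean payoff at episode end.
from typing import Callable, List


class QuantEventuallyMonitor:
    """Tracks the best quantitative satisfaction value of F(phi) seen so far."""

    def __init__(self, labelling: Callable[[object], float]):
        self.labelling = labelling   # maps an observed state to a score in [0, 1]
        self.best_so_far = 0.0       # quantitative value of F(phi) on the prefix

    def reset(self) -> None:
        self.best_so_far = 0.0

    def step(self, state: object) -> float:
        """Consume one observed state and return the gain in satisfaction."""
        value = self.labelling(state)
        improvement = max(0.0, value - self.best_so_far)
        self.best_so_far = max(self.best_so_far, value)
        return improvement


# Toy usage on a 1-D navigation task: the score grows as the agent
# approaches the goal, so rewards arrive throughout the trajectory.
GOAL = 10.0

def closeness_to_goal(position: float) -> float:
    return max(0.0, 1.0 - abs(GOAL - position) / GOAL)

monitor = QuantEventuallyMonitor(labelling=closeness_to_goal)
monitor.reset()
trajectory: List[float] = [0.0, 2.5, 4.0, 3.0, 7.5, 10.0]
rewards = [monitor.step(pos) for pos in trajectory]
print(rewards)  # [0.0, 0.25, 0.15, 0.0, 0.35, 0.25] -- sums to 1.0 on success
```

Under this shaping-style reading, the per-step rewards sum to the final quantitative satisfaction value, while a Boolean monitor would return reward only once the goal state is actually reached.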

Country of Origin
🇬🇧 United Kingdom

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)