Logic-based Task Representation and Reward Shaping in Multiagent Reinforcement Learning

Published: October 16, 2025 | arXiv ID: 2510.23615v1

By: Nishant Doshi

Potential Business Impact:

Teaches robots to work together faster.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

This paper presents an approach for accelerated learning of optimal plans for a task specified in Linear Temporal Logic (LTL) in multi-agent systems. Given a set of options (temporally abstract actions) available to each agent, we convert the task specification into the corresponding Büchi automaton and proceed with a model-free approach that collects transition samples and constructs a product Semi-Markov Decision Process (SMDP) on the fly. Value-based reinforcement learning algorithms can then be used to synthesize a correct-by-design controller without learning the underlying transition model of the multi-agent system. The exponential sample complexity introduced by multiple agents is addressed with a novel reward-shaping approach. We test the proposed algorithm in a deterministic gridworld simulation on different tasks and find that the reward shaping yields a significant reduction in convergence times. We also observe that using options becomes increasingly relevant as the state and action spaces of the multi-agent system grow.
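To make the abstract's pipeline concrete, below is a minimal sketch of the general technique it describes: running tabular Q-learning on the product of a two-agent gridworld and a hand-coded Büchi-style automaton, with potential-based reward shaping driven by automaton progress. This is not the paper's code; it uses primitive joint actions rather than the paper's options/SMDP formulation, and every name (GRID, GOALS, automaton_step, PHI, and the specific task "eventually a and eventually b") is an illustrative assumption.

```python
# Minimal sketch (not the paper's implementation): Q-learning on the product of a
# two-agent deterministic gridworld and a small automaton, with potential-based
# reward shaping that rewards progress through the automaton's states.
import random
from collections import defaultdict

GRID = 5                                  # 5x5 deterministic gridworld
GOALS = {"a": (4, 4), "b": (0, 4)}        # labeled cells (atomic propositions)

# Automaton for the LTL task "eventually a AND eventually b":
# 0 = nothing seen, 1 = a seen, 2 = b seen, 3 = accepting.
def automaton_step(q, labels):
    if q == 0 and "a" in labels: return 1
    if q == 0 and "b" in labels: return 2
    if q == 1 and "b" in labels: return 3
    if q == 2 and "a" in labels: return 3
    return q

def labels_of(positions):
    # An atomic proposition holds if any agent occupies its labeled cell.
    return {p for p, cell in GOALS.items() if cell in positions}

# Potential rises with automaton progress; the shaping term
# F = gamma * phi(q') - phi(q) is potential-based, so it preserves optimal policies.
PHI = {0: 0.0, 1: 0.5, 2: 0.5, 3: 1.0}

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # per-agent moves

def step(positions, joint_action):
    new = []
    for (x, y), (dx, dy) in zip(positions, joint_action):
        new.append((min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1)))
    return tuple(new)

def q_learning(episodes=2000, gamma=0.95, alpha=0.1, eps=0.1, shaping=True):
    Q = defaultdict(float)
    joint_actions = [(a1, a2) for a1 in ACTIONS for a2 in ACTIONS]
    for _ in range(episodes):
        positions, q = ((0, 0), (4, 0)), 0
        for _ in range(100):
            s = (positions, q)
            if random.random() < eps:
                a = random.choice(joint_actions)
            else:
                a = max(joint_actions, key=lambda ja: Q[(s, ja)])
            positions2 = step(positions, a)
            q2 = automaton_step(q, labels_of(positions2))
            r = 1.0 if q2 == 3 else 0.0                 # sparse task reward
            if shaping:
                r += gamma * PHI[q2] - PHI[q]           # shaping term
            s2 = (positions2, q2)
            best_next = max(Q[(s2, ja)] for ja in joint_actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            positions, q = positions2, q2
            if q == 3:
                break
    return Q

if __name__ == "__main__":
    q_table = q_learning()
    print("learned", len(q_table), "state-action values")
```

In this toy setting, toggling `shaping=False` illustrates the effect the abstract reports: without the progress-based shaping term the sparse accepting-state reward is rarely reached under the exponentially larger joint action space, so convergence is much slower.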

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Multiagent Systems