When Natural Strategies Meet Fuzziness and Resource-Bounded Actions (Extended Version)
By: Marco Aruta, Francesco Improta, Vadim Malvone, and more
In formal strategic reasoning for Multi-Agent Systems (MAS), agents are typically assumed to (i) employ arbitrarily complex strategies, (ii) execute each move at zero cost, and (iii) operate over fully crisp game structures. These idealized assumptions stand in stark contrast with human decision making in real-world environments. The natural strategies framework, along with some of its recent variants, partially addresses this gap by restricting strategies to concise rules guarded by regular expressions. Yet, it still overlooks both the cost of each action and the uncertainty that often characterizes human perception of facts over time. In this work, we introduce HumanATLF, a logic that builds upon natural strategies by employing both fuzzy semantics and resource-bounded actions: each action carries a real-valued cost drawn from a non-refillable budget, and atomic conditions and goals take degrees in [0,1]. We give a formal syntax and semantics, and prove that model checking is in P when both the strategy complexity k and the resource budget b are fixed, NP-complete when a single strategic operator over Boolean objectives is allowed, and Δ^P_2-complete when k and b vary. Moreover, we show that recall-based strategies can be decided in PSPACE. We implement our algorithms in VITAMIN, an open-source model-checking tool for MAS, and validate them on an adversarial, resource-aware drone rescue scenario.
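To make the two ingredients concrete, the sketch below is a purely illustrative toy, not the paper's actual definitions: the names `State`, `degree`, `fuzzy_and`, `affordable`, and `fire`, the Zadeh min/max connectives, and the drone values are all assumptions. It only shows the flavor of fuzzy atomic conditions with degrees in [0,1] and actions whose real-valued costs are drawn from a non-refillable budget.

```python
# Illustrative toy only: all names, semantics, and values here are assumptions,
# not the paper's definitions of HumanATLF.
from dataclasses import dataclass


@dataclass
class State:
    atoms: dict[str, float]     # fuzzy valuation: each atom holds to a degree in [0, 1]
    actions: dict[str, float]   # available actions and their real-valued costs


def degree(state: State, atom: str) -> float:
    """Degree to which an atomic condition holds (crisp atoms are 0.0 or 1.0)."""
    return state.atoms.get(atom, 0.0)


def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)            # Zadeh t-norm, one common fuzzy choice (assumed)


def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)            # Zadeh t-conorm (assumed)


def affordable(state: State, action: str, budget: float) -> bool:
    """An action is enabled only if its cost fits the remaining budget."""
    return state.actions.get(action, float("inf")) <= budget


def fire(state: State, action: str, budget: float) -> float:
    """Spend the action's cost; the budget is non-refillable, so it only decreases."""
    assert affordable(state, action, budget)
    return budget - state.actions[action]


# Usage: a drone at a rescue site with partial visibility and a small fuel budget.
s = State(atoms={"victim_visible": 0.7, "zone_safe": 0.4},
          actions={"descend": 2.5, "scan": 1.0})
budget = 3.0
goal_degree = fuzzy_and(degree(s, "victim_visible"), degree(s, "zone_safe"))  # 0.4
if affordable(s, "scan", budget):
    budget = fire(s, "scan", budget)  # 2.0 remaining; "descend" is now unaffordable
print(goal_degree, budget)
```

In this reading, a goal is no longer simply satisfied or violated but achieved to a degree, and a natural strategy can only prescribe actions the agent can still pay for out of its remaining budget, which is the intuition behind fixing both the strategy complexity k and the budget b in the complexity results above.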