
Correctness-Guaranteed Strategy Synthesis and Compression for Multi-Agent Autonomous Systems

EasyChair Preprint no. 14182

27 pages
Date: July 26, 2024

Abstract

Planning is a critical function of multi-agent autonomous systems; it includes both path finding and task scheduling. Exhaustive search-based methods, such as model checking and algorithmic game theory, can solve simple instances of multi-agent planning, but they suffer from state-space explosion when the number of agents is large. Learning-based methods alleviate this problem, but lack a guarantee of correctness of their results. In this paper, we introduce MoCReL, a new version of our previously proposed method that combines model checking with reinforcement learning to solve the planning problem. The approach uses reinforcement learning to synthesize path plans and task schedules for large numbers of autonomous agents, and model checking to verify the correctness of the synthesized strategies. Further, MoCReL can compress large strategies into smaller ones that are as little as 0.05% of the original size, while preserving their correctness, as we show in this paper. MoCReL is integrated into a new version of Uppaal Stratego that supports calling external libraries when running learning and verification on timed-game models.
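The abstract describes a synthesize-verify-compress workflow: reinforcement learning proposes a strategy, model checking certifies it, and a compression step shrinks it while re-checking correctness. The following is a minimal Python sketch of that loop under stated assumptions only: the functions rl_synthesize, model_check, and compress are hypothetical placeholders, not MoCReL's or Uppaal Stratego's actual API, and the strategy is modeled as a plain state-to-action lookup table.

```python
# Hypothetical sketch of a synthesize-verify-compress loop.
# These functions do not correspond to MoCReL's or Uppaal Stratego's real API;
# they only illustrate the workflow outlined in the abstract.

from typing import Callable, Dict, Hashable

State = Hashable
Action = str
Strategy = Dict[State, Action]  # strategy as a state -> action lookup table


def synthesize_and_verify(
    rl_synthesize: Callable[[], Strategy],    # learning-based synthesis (e.g. Q-learning)
    model_check: Callable[[Strategy], bool],  # exhaustive check of the safety/reachability goal
    compress: Callable[[Strategy], Strategy], # compression step that must not break correctness
    max_rounds: int = 10,
) -> Strategy:
    """Repeatedly learn a candidate strategy, verify it, and compress it."""
    for _ in range(max_rounds):
        candidate = rl_synthesize()       # 1. reinforcement learning proposes a strategy
        if not model_check(candidate):    # 2. model checking rejects incorrect candidates
            continue                      #    learn again if the guarantee does not hold
        compact = compress(candidate)     # 3. shrink the strategy table
        if model_check(compact):          # 4. re-verify: compression must preserve correctness
            return compact
    raise RuntimeError("no verified strategy found within the given number of rounds")
```

The sketch only captures the division of labor claimed in the abstract, namely that learning handles scalability while verification provides the correctness guarantee, both for the synthesized strategy and for its compressed form.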

https://doi.org/10.1016/j.scico.2022.102894

Keyphrases: Journal first, Multi-Agent Autonomous Systems, timed games

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:14182,
  author = {Rong Gu and Peter Jensen and Cristina Seceleanu and Eduard Enoiu and Kristina Lundqvist},
  title = {Correctness-Guaranteed Strategy Synthesis and Compression for Multi-Agent Autonomous Systems},
  howpublished = {EasyChair Preprint no. 14182},
  year = {EasyChair, 2024}}