ARCH-COMP24: Editor's Preface

This volume contains the papers presented at the 11th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH), as well as the results of the 8th edition of ARCH-COMP, a competition for the formal verification of continuous and hybrid systems. The workshop was held on Wednesday, July 3, 2024, in Boulder, Colorado, USA, as part of the 8th IFAC Conference on Analysis and Design of Hybrid Systems. Previous editions of the ARCH workshop series were held in 2014 in Berlin, 2015 in Seattle, 2016 in Vienna, 2017 in Pittsburgh, 2018 in Oxford, 2019 in Montreal, 2022 in Munich, and 2023 in San Antonio, while the 2020 and 2021 editions were held online.

The ARCH workshops aim to bring together people from industry with researchers and tool developers interested in applying verification techniques to continuous and hybrid systems. The workshops are accompanied by a collaborative website (cps-vo.org/group/ARCH), which features a curated collection of benchmarks, disseminates results submitted by researchers and tool developers, and provides feedback from practitioners in the form of experience reports. The benchmark repository is intended to serve as a lasting and evolving resource for the research community.

The workshop received three regular paper submissions, all of which were accepted by the program committee. Each submission was reviewed by three program committee members, including at least one member from academia and one from industry.

In addition to the workshop papers, these proceedings present the results of the 8th edition of ARCH-COMP, a friendly competition that was carried out online from April to July 2024. ARCH-COMP showcases the participating tools and serves as a testing ground to see which methods are particularly suitable for which types of problems.
As a side effect, it aims to establish a consensus on how to compare different software implementations in the context of verification, as reviewers routinely demand such comparisons in scientific publications. All participating tools were represented in the competition jury, which was headed by the organizers. In the problem phase of the competition, participants submitted problem instances, which were then approved by the jury by consensus. In most categories, participants submitted a code package, and the performance measurements were run centrally under the supervision of Taylor T. Johnson. As indicated in the reports, participants who could not submit executable code carried out the performance measurements themselves. To further establish the trustworthiness of the results, the code with which the results were obtained is available on the ARCH website. The problem descriptions and the results are provided in a report for each category, drafted by the category lead and representatives of the participating tools. Due to the diversity of the problems, ARCH-COMP does not provide any ranking of tools. Nonetheless, the presented results provide the most complete assessment of tools for the safety verification of continuous and hybrid systems to date.

Matthias Althoff
Goran Frehse

Munich and Palaiseau, August 12, 2024