The 5th Reactive Synthesis Competition (SYNTCOMP 2018): Benchmarks, Participants & Results
Abstract
We report on the fifth reactive synthesis competition (SYNTCOMP 2018). We introduce four new benchmark classes that have been added to the SYNTCOMP library, and briefly describe the evaluation scheme and the experimental setup of SYNTCOMP 2018. We give an overview of the participants of SYNTCOMP 2018 and highlight changes compared to previous years. Finally, we present and analyze the results of our experimental evaluation, including a ranking of tools with respect to quantity and quality of solutions.
1 Introduction
The synthesis of reactive systems from formal specifications, as first defined by Church [12], is one of the major challenges of computer science. Until recently, research focused on theoretical results, with little impact on the practice of system design. Since 2014, the reactive synthesis competition (SYNTCOMP) strives to increase the practical impact of these theoretical advancements [24]. SYNTCOMP is designed to foster research in scalable and user-friendly implementations of synthesis techniques by establishing a standard benchmark format, maintaining a challenging public benchmark library, and providing an independent platform for the comparison of tools under consistent experimental conditions.
SYNTCOMP is held annually, and competition results are presented at the International Conference on Computer Aided Verification (CAV) and the Workshop on Synthesis (SYNT) [24, 26, 25, 22]. This year, like in its inaugural edition, SYNTCOMP was part of the FLoC Olympic Games. For the first two competitions, SYNTCOMP was limited to safety properties specified as monitor circuits in an extension of the AIGER format [21]. SYNTCOMP 2016 introduced a new track that is based on properties in full linear temporal logic (LTL), given in the temporal logic synthesis format (TLSF) [23, 27].
The organization team of SYNTCOMP 2018 consisted of R. Bloem and S. Jacobs.
Outline.
In Section 2, we present four new benchmark classes that have been added to the SYNTCOMP library for SYNTCOMP 2018. In Section 3, we briefly describe the setup, rules and execution of the competition, followed by an overview of the participants in Section 4. Finally, we provide the experimental analysis in Section 5 and concluding remarks in Section 6.
2 New Benchmarks
In this section, we describe benchmarks added to the SYNTCOMP library since the last iteration of the competition. For more details on the complete benchmark library, we refer to the previous competition reports [24, 26, 25, 22].
2.1 Benchmark Set: Temporal Stream Logic (TSL)
This set of benchmarks stems from work by Finkbeiner et al. [17] on the synthesis of functional reactive programs (FRPs) that handle data streams. They are originally specified in temporal stream logic (TSL) and are not parameterized. The benchmark set covers:

toy examples like simple buttons or incrementors,

different versions of an escalator controller,

a music app based on the Android music player library,

a simple game where the player has to react to the movement of a slider,

controllers for simulated autonomous cars in The Open Racing Car Simulator (TORCS) framework [40], and

three benchmarks based on the FRP Zoo project [19] that specify the behavior of a program with buttons, a counter, and a display.
The set contains benchmarks that have been translated to TLSF by F. Klein and M. Santolucito.
2.2 Benchmark Set: Hardware Components
This benchmark set comprises several standard hardware components, such as a mux, an n-ary latch, a shifter, several variants of a collector that signals whether all of its clients have sent a specific signal, as well as several new arbiter variants.
The set contains parameterized benchmarks, encoded in TLSF by F. Klein.
2.3 Benchmark: Tic Tac Toe
An infinite variant of Tic Tac Toe with repeated games, where each game can end in a win, loss, or draw. The system player has to ensure that they never lose a game. This benchmark is not parameterized and has been encoded in TLSF by F. Klein.
2.4 Benchmark Set: Abstraction-based Control
This set of benchmarks stems from research on abstraction-based control synthesis by Nilsson, Liu and Ozay [33]. The benchmarks are obtained by abstracting physical systems governed by ordinary differential equations into finite-state transition systems. The benchmark set contains a toy example and four variants of a linear inverted pendulum model.
In preliminary experiments, the organizers found that some of these examples are so large that even the translation from TLSF into other formats takes more than 20 minutes. Therefore, these benchmarks have not been used in SYNTCOMP 2018, and will instead be posted as challenge benchmarks for the community to encourage further development of algorithms and solvers.
These benchmarks have been encoded into TLSF by Z. Liu.
3 Setup, Rules and Execution
We give a brief overview of the setup, rules and execution of SYNTCOMP 2018. Apart from the selection of benchmarks for the LTL (TLSF) track, all of this is identical to the setting of SYNTCOMP 2017 (cf. the report of SYNTCOMP 2017 [22] for more details).
General Rules.
The competition has two main tracks, one based on safety specifications in AIGER format, and the other on full LTL specifications in TLSF. The tracks are divided into subtracks for realizability checking and synthesis, and into two execution modes: sequential (using a single core of the CPU) and parallel (using multiple cores).
Every tool can run in up to three configurations per subtrack and execution mode. Before the competition, all tools are tested on a small benchmark set, and authors can submit bugfixes if problems are found.
Ranking Schemes.
In all tracks, there is a ranking based on the number of correctly solved problems within the timeout. In the synthesis tracks, correctness of the solution additionally has to be confirmed by a model checker.
Furthermore, in the synthesis tracks there is a ranking based on the quality of the solution, measured by the number of gates in the produced AIGER circuit. To this end, the size of the solution is compared to the size of a reference solution. A correct answer that matches the reference size is rewarded a fixed baseline of points, and smaller or larger solutions are awarded correspondingly more or fewer points.
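The precise scoring constants are elided above; purely as a hedged illustration, a logarithmic scheme of the following shape fits the description. The baseline of 2 points and the base-10 logarithm are assumptions for this sketch, not the official SYNTCOMP formula.

```python
import math

def quality_points(size, ref_size):
    """Illustrative quality score: a baseline of 2 points when the solution
    matches the reference size, adjusted logarithmically so that smaller
    circuits score higher and larger circuits score lower.
    (Constants are assumptions; the report does not state them.)"""
    return 2 - math.log10((size + 1) / (ref_size + 1))
```

With this shape, halving the circuit size relative to the reference gains roughly 0.3 points, while a tenfold blow-up loses a full point.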
Selection of Benchmarks.
Benchmarks are selected according to the same scheme as in previous years, based on a categorization into different classes. For the safety (AIGER) track, the same number of benchmarks was selected per class, but a different set of benchmarks unless all benchmarks from a class were selected. For the LTL (TLSF) track, the selection includes parameterized benchmarks, each in several instances. (Of the new parameterized benchmarks, all but one were selected: only the collector_enc benchmark was not used, since it contains a “Preset” block that is not supported by all competing tools.)
Execution.
As in the last three years, SYNTCOMP 2018 was run at Saarland University, on a set of identical machines, each with a single quad-core Intel Xeon processor (E3-1271 v3, 3.6 GHz) and 32 GB RAM (PC1600, ECC). Benchmarking was again organized on the EDACC platform [5].
The model checker used for checking correctness of solutions in the safety (AIGER) track is IIMC in version 2.0 (formerly available at ftp://vlsi.colorado.edu/pub/iimc/, but unavailable as of April 2019), preceded by an invariant check (available from the SYNTCOMP repository at https://bitbucket.org/swenjacobs/syntcomp/src, subdirectory tools/WinRegCheck; accessed April 2019) for solvers that supply an inductive invariant in addition to their solution. For the LTL (TLSF) track, the model checker used was V3 [39] (available at https://github.com/chengyinwu/V3; accessed April 2019).
4 Participants
Ten tools participated competitively in SYNTCOMP 2018: five in the safety track, and five in the LTL track. In addition, two tools were entered hors concours in the safety track. We briefly describe the participants and give pointers to additional information. For additional details on the implemented techniques and optimizations, we refer to the previous SYNTCOMP reports [24, 26, 25, 22].
4.1 Safety Track
Four of the five competitive participants in this track ran in the same version as in SYNTCOMP 2017. The only one that received an update is Simple BDD Solver. One additional tool was entered hors concours by M. Sakr and competition organizer S. Jacobs.
Updated Tool: Simple BDD Solver
An update of Simple BDD Solver was submitted by A. Walker, and competed in the realizability track. Simple BDD Solver implements the classical BDD-based fixpoint algorithm for safety games. In sequential mode, it runs in three configurations: two based on an abstraction-refinement approach inspired by de Alfaro and Roy [3] (configurations abs1 and abs2), and one without any abstraction. All three implement a number of important optimizations. These configurations are the same as last year.
Additionally, three new configurations have been entered for the parallel mode, which run different portfolios of the algorithms in the sequential mode.
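The classical fixpoint algorithm underlying these configurations computes the controller's winning region as a greatest fixpoint of a controllable-predecessor operator. The tools do this symbolically with BDDs; purely as an illustration, the same computation can be sketched over explicit state sets (all names below are illustrative, not taken from any tool):

```python
def solve_safety_game(states, controller_states, edges, safe):
    """Greatest-fixpoint computation of the controller's winning region.

    states: all states of the game graph
    controller_states: states where the controller chooses the successor
    edges: dict mapping each state to its list of successors
    safe: set of states the controller must stay inside forever
    """
    win = set(safe)
    while True:
        # CPre(win): controller states with SOME successor in win,
        # environment states with ALL successors in win.
        cpre = {s for s in states
                if (s in controller_states and any(t in win for t in edges[s]))
                or (s not in controller_states and all(t in win for t in edges[s]))}
        new_win = win & cpre
        if new_win == win:
            return win
        win = new_win
```

The controller wins from exactly the states that survive every iteration; symbolic tools perform the same iteration with BDD operations instead of set comprehensions.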
Implementation, Availability
The source code of Simple BDD Solver is available online at https://github.com/adamwalker/syntcomp.
Reentered: Swiss AbsSynthe v2.1
AbsSynthe was submitted by R. Brenguier, G. A. Pérez, J.-F. Raskin, and O. Sankur, and competed in the realizability and the synthesis tracks. It also implements the classical BDD-based algorithm for safety games, and additionally supports decomposition of the problem into independent subgames, as well as an abstraction approach [10, 11, 22]. It competes in its best-performing configurations from last year:

sequential configuration (SC2) uses abstraction, but no compositionality,

sequential configuration (SC3) uses a compositional algorithm, combined with an abstraction method, and

parallel configuration (PC1) is a portfolio of configurations with different combinations of abstraction and compositionality.
Implementation, Availability
The source code of AbsSynthe is available at https://github.com/gaperez64/AbsSynthe/tree/nativedevpar.
Reentered: Demiurge 1.2.0
Demiurge was submitted by R. Könighofer and M. Seidl, and competed in the realizability and the synthesis tracks. Demiurge implements SAT-based methods for solving safety games [8, 36]. In the competition, three different methods are used — one of them as the only method in sequential configuration (D1real), and a combination of all three methods in parallel configuration (P3real). This year, Demiurge competed in the same version as in previous years.
Implementation, Availability
The source code of Demiurge is available at https://www.iaik.tugraz.at/content/research/opensource/demiurge/ under the GNU LGPL version 3.
Reentered: SafetySynth
SafetySynth was submitted by L. Tentrup, and competed in both the realizability and the synthesis track. SafetySynth implements the classical BDDbased algorithm for safety games, using the optimizations that were most beneficial for BDDbased tools in SYNTCOMP 2014 and 2015 [24, 26]. It competed in the single most successful configuration from last year.
Implementation, Availability
The source code of SafetySynth is available online at: https://www.react.unisaarland.de/tools/safetysynth/.
Reentered: TermiteSAT
TermiteSAT was submitted by A. Legg, N. Narodytska and L. Ryzhyk, and competed in the realizability track (parallel mode). TermiteSAT implements a SAT-based method for solving safety games based on Craig interpolation. It competes in its most successful configuration from last year, which implements a hybrid method that runs this algorithm alongside one of the algorithms of Simple BDD Solver [30], with communication of intermediate results between the different algorithms.
Implementation, Availability
The source code of TermiteSAT is available online at: https://github.com/alexlegg/TermiteSAT.
Hors Concours: LazySynt — Symbolic Lazy Synthesis
LazySynt was submitted by M. Sakr and S. Jacobs. It participated hors concours in the synthesis track. In contrast to the classical BDD-based algorithm and the SAT-based methods implemented in Demiurge, LazySynt implements a combined forward-backward search embedded into a refinement loop, generating candidate solutions that are checked and refined with a combination of backward model checking and forward generation of additional constraints [28].
Implementation, Availability
LazySynt is implemented in Python and uses the CUDD library for BDD operations and ABC for compression of AIGER solutions.
4.2 LTL Track
This track had five participants in 2018, one of which is a new entrant that has not participated in previous competitions. Four tools competed in both the realizability and the synthesis track, one only in the realizability track. We briefly describe the synthesis approach and implemented optimizations of the new tool, followed by an overview of the changes in reentered tools. For additional details on the latter, we refer to the reports of SYNTCOMP 2016 and 2017 [25, 22].
New Entrant: Strix
Strix was submitted by P. J. Meyer, S. Sickert and M. Luttenberger. It competed in both the realizability and the synthesis track.
Strix implements a translation of the LTL synthesis problem to parity games. At its core, it uses explicit-state game solving based on strategy iteration [7, 35, 32]. To make the approach efficient, it uses two main optimizations:

The specification is decomposed into smaller parts that can be translated more efficiently to an automata representation. This usually results in a large number of simple safety and co-safety conditions, and only a few conditions that require automata with more complicated acceptance conditions. Moreover, this step also detects and exploits symmetry in the specification.

Game solving begins before the full specification is translated, and missing parts of the specification are only translated into the automata representation if they are relevant for solving the game.
Strix competed in the following configurations:

one configuration each for the sequential and the parallel realizability tracks (with the only difference being the number of running threads),

three configurations for the sequential synthesis track, which only differ in the construction of the AIGER solution from the Mealy machine that is constructed as an intermediate result: configuration basic directly encodes the Mealy machine into AIGER, configuration min minimizes the number of states in the Mealy machine before translating it to AIGER, and configuration labels uses a labeling of states in the Mealy machine based on the product automaton, and assigns a separate set of latches in the AIGER circuit to each component of the product.

in parallel synthesis mode, Strix invokes the three sequential synthesis configurations in parallel, as well as an additional method that combines the approaches of min and labels.
Implementation, Availability
Strix is implemented in Java and C++. It uses MeMin [2] (available at https://embedded.cs.unisaarland.de/MeMin.php; accessed April 2019) for the minimization of Mealy machines constructed from winning strategies, and ABC [9] (available at https://github.com/berkeleyabc/abc; accessed April 2019) for the compression of AIGER circuits obtained from these Mealy machines.
The source code of Strix is available at https://strix.model.in.tum.de/.
Updated Tool: BoWSer
BoWSer was submitted by B. Finkbeiner and F. Klein. It implements different extensions of the bounded synthesis approach [18], which solves the LTL synthesis problem by first translating the specification into a universal co-Büchi automaton, and then encoding the acceptance of a transition system with a bounded number of states into a constraint system, in this case a propositional satisfiability (SAT) problem. A solution to this SAT problem then represents a transition system that satisfies the original specification. To check for unrealizability of the specification, the existence of a winning strategy for the environment is also encoded into SAT.
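The outer loop of bounded synthesis just described can be sketched as follows. Here `encode_system`, `encode_env` and `solve` are hypothetical callables standing in for the SAT encoding and the SAT solver; the actual tools typically run the system and environment checks in parallel rather than strictly in sequence.

```python
def bounded_synthesis(encode_system, encode_env, solve, max_bound=10):
    """Increase the state bound until either a system implementation or an
    environment counter-strategy is found.

    encode_system(n) / encode_env(n): build a constraint system asking for a
    winning transition system of the respective player with at most n states.
    solve(constraints): return a model, or None if unsatisfiable.
    """
    for n in range(1, max_bound + 1):
        model = solve(encode_system(n))
        if model is not None:
            return "realizable", model          # model encodes an implementation
        if solve(encode_env(n)) is not None:
            return "unrealizable", None         # environment counter-strategy found
    return "unknown", None                      # bound exhausted
```

The approach is complete because, for realizable specifications, some finite-state implementation exists and is eventually found at a large enough bound (and symmetrically for unrealizable ones).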
In the basic configuration, a solution from the SAT solver is directly encoded into an AIGER circuit, and then handed to Yosys for simplification. As extensions, BoWSer implements bounded cycle synthesis [16], which restricts the structure of the solution with respect to the number of cycles in the transition system, as well as a third encoding that puts bounds on the number of gates and latches in the resulting AIGER circuit.
Compared to last year’s version, a number of small improvements that speed up computations have been implemented, and an experimental preprocessor for LTL formulas has been added; it is used in configuration (synth), described below.
BoWSer competed in sequential and parallel variants of the following configurations:

configuration (basic) implements bounded synthesis in the basic version mentioned above,

configuration (synth) implements the same approach as (basic), except that it uses an additional preprocessor for LTL formulas to simplify the specification,

configuration (opt) also implements bounded cycle synthesis on top of bounded synthesis, i.e., in a first step it searches for a solution with a bounded number of states, and if that exists, it additionally bounds the number of cycles.
In sequential mode, these configurations spawn multiple threads that are executed on a single CPU core. The parallel configurations are mostly the same as the sequential ones, but use a slightly different strategy for exploring the search space of solutions.
Implementation, Availability
BoWSer is implemented in Haskell. It uses Spot (available at https://spot.lrde.epita.fr; accessed April 2019) to convert specifications into automata, and MapleSAT [31] (available at https://sites.google.com/a/gsd.uwaterloo.ca/maplesat/; accessed April 2019) to solve SAT queries. For circuit generation, it uses the Yosys framework [20] (available at http://www.clifford.at/yosys/; accessed April 2019). The website of BoWSer is https://www.react.unisaarland.de/tools/bowser/.
Updated Tool: ltlsynt
ltlsynt was submitted by M. Colange and T. Michaud and competed in three different configurations in both the sequential realizability and sequential synthesis tracks.
To solve the synthesis problem, ltlsynt uses a translation to parity games. To increase efficiency, it uses an intermediate translation to automata with a transition-based generalized Büchi acceptance condition, which are simplified based on heuristics [13]. To convert the automaton into a game, each transition is split into two separate actions, one of the environment and one of the controller; the resulting automaton is then determinized and can be interpreted as a parity game (with a transition-based winning condition). To solve this parity game, ltlsynt uses the well-known recursive algorithm by Zielonka [41], adapted to games with transition-based winning conditions. A winning strategy for the parity game defines a satisfying implementation of the controller in the synthesis problem. ltlsynt encodes the strategy into an AIGER circuit using an intermediate representation in Binary Decision Diagrams (BDDs), which allows some simplifications.
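Zielonka's recursive algorithm mentioned above can be sketched on explicit, state-based parity games as follows; ltlsynt itself works with transition-based winning conditions, so this is only the textbook variant. Convention here: player 0 wins a play iff the highest priority occurring infinitely often is even, and every state is assumed to have a successor.

```python
def attractor(nodes, owner, edges, player, target):
    """States in `nodes` from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succ = [u for u in edges[v] if u in nodes]
            if owner[v] == player:
                pull = any(u in attr for u in succ)
            else:
                pull = all(u in attr for u in succ)  # vacuously true if v is stuck
            if pull:
                attr.add(v)
                changed = True
    return attr


def zielonka(nodes, owner, priority, edges):
    """Winning regions (W0, W1) of a parity game, by Zielonka's recursion."""
    if not nodes:
        return set(), set()
    d = max(priority[v] for v in nodes)
    p = d % 2                                # player favoured by top priority d
    top = {v for v in nodes if priority[v] == d}
    a = attractor(nodes, owner, edges, p, top)
    w = list(zielonka(nodes - a, owner, priority, edges))
    if not w[1 - p]:
        w[p], w[1 - p] = set(nodes), set()   # p wins the whole subgame
        return w[0], w[1]
    b = attractor(nodes, owner, edges, 1 - p, w[1 - p])
    w = list(zielonka(nodes - b, owner, priority, edges))
    w[1 - p] |= b
    return w[0], w[1]
```

The recursion removes the attractor of the highest-priority states and, if the opponent wins part of the remainder, re-solves the game without the opponent's attractor to that region.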
Compared to last year’s version, ltlsynt uses additional optimizations in the determinization and in Zielonka’s algorithm, and it now competes in three different configurations:

configuration (sd) implements splitting before determinization (as described above),

configuration (ds) implements splitting after determinization, and

configuration (incr) implements an incremental form of determinization.
Implementation, Availability
ltlsynt is implemented in C++ and integrated into a branch of the Spot automata library [13], which is used for translating the specification into automata and for manipulating them. Spot also integrates the BDD library BuDDy and the SAT solver PicoSAT.
The source code of ltlsynt is available in branch tm/ltlsyntpg of the GIT repository of Spot at https://gitlab.lrde.epita.fr/spot/spot.git.
Updated Tool: BoSy
BoSy was submitted by P. Faymonville, B. Finkbeiner and L. Tentrup, and competed in both the realizability and the synthesis track. To detect realizability, BoSy translates the LTL specification into a co-Büchi automaton, and then into a safety automaton by bounding the number of visits to rejecting states. The resulting safety game is solved by SafetySynth. For synthesis, BoSy additionally implements bounded synthesis [18] with an encoding into quantified Boolean formulas (QBF) [14, 15]. To detect unrealizability, the existence of a winning strategy for the environment is encoded in a similar way and checked in parallel. The resulting QBF formulas are simplified using the QBF preprocessor Bloqqer [6]. To solve the QBF constraints, BoSy uses a combination of CAQE [34, 38] and the certifying QBF solver QuAbS [37]; the certificate returned by QuAbS represents a solution to the synthesis problem. This solution is then converted into an AIGER circuit and further simplified using the ABC framework. The resulting strategy (if any) is compared to the solution found by SafetySynth, and the smaller one is returned.
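The co-Büchi-to-safety reduction described above tracks how often rejecting states have been visited and declares the bound-overflow unsafe. A toy sketch for a deterministic automaton (names and representation are illustrative only):

```python
def cobuchi_to_safety(init, delta, rejecting, k):
    """Bound visits to rejecting states of a deterministic co-Büchi automaton.

    delta: state -> {letter: successor}. Returns the reachable states of the
    safety automaton (pairs of automaton state and visit counter), its
    transition map, and the unsafe sink reached once the bound k is exceeded.
    """
    dead = "unsafe"
    start = (init, 1 if init in rejecting else 0)
    trans, todo, seen = {}, [start], {start, dead}
    while todo:
        s = todo.pop()
        q, n = s
        trans[s] = {}
        for letter, q2 in delta[q].items():
            n2 = n + 1 if q2 in rejecting else n   # count rejecting visits
            t = dead if n2 > k else (q2, n2)
            trans[s][letter] = t
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen, trans, dead
```

Any run that avoids the sink visits rejecting states at most k times and is therefore accepted by the original co-Büchi automaton, which is what makes this a sound (though incomplete for fixed k) approximation.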
Two configurations of BoSy competed in SYNTCOMP 2018: configuration (basic) and configuration (opt), where the latter further improves the size of the strategy by encoding the existence of an AIGER circuit representing the strategy directly into a QBF query. Both configurations support a parallel mode, if more than one core is available.
Implementation, Availability.
BoSy is written in Swift. It uses LTL3BA [4] (available at https://sourceforge.net/projects/ltl3ba/; accessed April 2019) or Spot to convert LTL specifications into Büchi automata. It uses Bloqqer (available at http://fmv.jku.at/bloqqer; accessed April 2019), CAQE (available at https://www.react.unisaarland.de/tools/caqe/; accessed April 2019) and QuAbS (available at https://www.react.unisaarland.de/tools/quabs/; accessed April 2019) to solve QBF constraints, and ABC to simplify solutions.
The code is available online at: https://www.react.unisaarland.de/tools/bosy/.
Updated Tool: sdfhoa (previously Party (aiger))
sdfhoa was submitted by A. Khalimov, and competed in the sequential realizability track in a single configuration. The tool is a C++ rewrite with minor optimizations of the aiger configuration of last year’s entrant Party (aiger) [29].
sdfhoa follows a variant of the bounded synthesis approach [18] and translates a given specification to a universal safety automaton that approximates liveness properties by bounded liveness. The transition relation of the safety automaton is translated into a symbolic representation with BDDs, and then treated as a safety game. The game is solved using the standard symbolic fixpoint algorithm.
Compared to last year’s Party (aiger), sdfhoa removes an unnecessary step in the translation "automaton → AIGER → BDD", which was a technical detail, and instead goes directly "automaton → BDD".
Implementation, Availability.
sdfhoa is written in C++. As input, it accepts a universal co-Büchi automaton in HOA format; the output is a yes/no answer, without a circuit. For the competition, a wrapper script converted a given LTL formula into an automaton using the Spot library.
The source code is available at: https://github.com/5nizza/sdfhoa.
5 Experimental Results
We present the results of SYNTCOMP 2018, separated into the safety (AIGER) track and the LTL (TLSF) track. As in previous years, both tracks are separated into realizability and synthesis subtracks, and parallel and sequential execution modes. Detailed results of the competition are also directly accessible via the web frontend of our instance of the EDACC platform at http://syntcomp.cs.unisaarland.de.
5.1 Safety (AIGER) Track: Realizability
In this track, tools competed in different configurations, in sequential execution mode and in parallel mode. We compare their performance on a selection of benchmark instances.
We first restrict the evaluation of results to purely sequential tools, then extend it to include also the parallel versions, and finally give a brief analysis of the results.
Sequential Mode.
In sequential mode, Simple BDD Solver competed with three configurations (basic, abs1, abs2), and the reentered tools each with their best-performing configuration from last year: AbsSynthe (SC3 for realizability and SC2 for synthesis), Demiurge (D1real and D1synt), and SafetySynth (Basic).
The number of solved instances per configuration, as well as the number of uniquely solved instances, are given in Table 1. No tool could solve more than 165 of the selected instances, and a number of instances could not be solved by any tool within the timeout.
The configurations of LazySynt (running hors concours) solve fewer instances, but the (basic) configuration provides one unique solution, and another benchmark instance is solved by both configurations of LazySynt, but by none of the competing tools.
Tool  (configuration)  Solved  Unique 

Simple BDD Solver  (abs1)  165  1 
Simple BDD Solver  (basic)  161  1 
SafetySynth  (basic)  160  0 
Simple BDD Solver  (abs2)  159  4 
AbsSynthe  (SC3)  157  8 
Demiurge  (D1real)  125  18 
LazySynt  (genDel)  98  0 
LazySynt  (basic)  94  1 
Figure 1 gives a cactus plot for the runtimes of the best sequential configuration of each tool.
Parallel Mode.
Four tools had parallel configurations for the realizability track: three portfolio configurations of Simple BDD Solver (par1, par2, par3), and one configuration each for the reentered tools AbsSynthe (PC1), Demiurge (P3real), and TermiteSAT (hybrid). They ran on the same benchmarks as in the sequential mode, but the runtime of tools was measured in wall time instead of CPU time. The results are given in Table 2. Compared to sequential mode, a number of additional instances could be solved: AbsSynthe, Simple BDD Solver and TermiteSAT all have one or more configurations that solve more instances than the best tool in sequential mode. The best result is 179 solved instances. Several instances could not be solved by any parallel configuration, and some could not be solved by any configuration, including the sequential ones.
Tool  (configuration)  Solved  Unique 

TermiteSAT  (hybrid)  179  0 
AbsSynthe  (PC1)  176  1 
Simple BDD Solver  (par2)  170  0 
Simple BDD Solver  (par3)  168  0 
Simple BDD Solver  (par1)  163  0 
Demiurge  (P3real)  160  4 
Note that in Table 2 we only count a benchmark instance as uniquely solved if it is not solved by any other configuration, including the sequential configurations.
Both modes: Solved Instances by Category.
Figure 3 gives an overview of the number of solved instances per configuration and category, for the best sequential and parallel configuration of each tool and different benchmark categories.
Analysis.
The best sequential configuration on this year’s benchmark set is again Simple BDD Solver (abs1), as in previous years. However, note that there are many unique solutions that are not found by this configuration, so the room for improvement should still be significant.
In particular, AbsSynthe (SC3) solves 8 instances less than the best configuration, but also solves 8 instances uniquely, and Demiurge (D1real) is 40 instances behind, but can solve 18 instances that no other sequential configuration solves. Figure 3 shows that Demiurge is much better than all other approaches in certain categories, like HWMCC, Load Balancer or gb (while it is far behind the other approaches in other categories, such as AMBA, Cycle Scheduling, Factory Assembly Line, and Moving Obstacle).
Considering parallel configurations, last year’s winner TermiteSAT (hybrid) again solves the most instances, followed closely by AbsSynthe (PC1), like last year. The new portfolio configurations of Simple BDD Solver are between 9 and 16 instances behind the best configuration. Finally, the parallel configuration of Demiurge again comes close to the other parallel configurations, and solves 4 instances that no other configuration, sequential or parallel, can solve.
5.2 Safety (AIGER) Track: Synthesis
In the track for synthesis from safety specifications in AIGER format, participants tried to solve all benchmarks that have been solved by at least one competing configuration in the realizability track. Three tools entered the track: AbsSynthe, Demiurge and SafetySynth, all of them in their respective best configurations for sequential and parallel mode from last year (SafetySynth does not have a separate parallel configuration).
Like last year, there are two different rankings in the synthesis track, based on the number of correct solutions and on their size compared to a reference implementation, respectively. As always, a solution for a realizable specification is only considered as correct if it can be verified (cf. Section 3). We present the results for the sequential configurations, followed by parallel configurations, and end with an analysis of the results.
Sequential Mode.
Table 3 summarizes the experimental results, including the number of solved benchmarks, the uniquely solved instances, the number of solutions that could not be model-checked within the timeout, and the accumulated quality of solutions. No sequential configuration solved more than 154 of the benchmarks, and several instances could not be solved by any competing tool (not counting the configurations of LazySynt that ran hors concours).
Tool  (configuration)  Solved  Unique  MC Timeout  Quality 

SafetySynth  (basic)  154  17  0  224 
AbsSynthe  (SC2)  145  6  0  184 
Demiurge  (D1synt)  112  22  1  158 
LazySynt  (genDel)  94  0  4  137 
LazySynt  (basic)  91  2  4  125 
Parallel Mode.
Table 4 summarizes the experimental results, again including the number of solved benchmarks, the uniquely solved instances, the number of solutions that could not be verified within the timeout, and the accumulated quality of solutions. No tool solved more than 156 problem instances. Several instances could not be solved by any parallel configuration, and some could not be solved by any configuration, including the sequential ones.
Like in the parallel realizability track, we only consider instances as uniquely solved if they are not solved by any other configuration, including sequential ones. Moreover, the quality ranking is computed as if the sequential configuration had also participated (which makes a difference for benchmarks that do not have a reference solution).
Tool  (configuration)  Solved  Unique  MC Timeout  Quality 

AbsSynthe  (PC1)  156  0  0  204 
Demiurge  (P3Synt)  148  14  0  240 
Analysis.
As expected based on the results of previous years, SafetySynth and AbsSynthe compete for the highest number of solved instances, while Demiurge again wins the quality ranking and produces a very high number of unique solutions.
5.3 LTL (TLSF) Track: Realizability
In the track for realizability checking of LTL specifications in TLSF, tools competed in sequential and parallel configurations. In the following, we compare the results of these configurations on the benchmarks that were selected for SYNTCOMP 2018.
Again, we first restrict our evaluation to sequential configurations, then extend it to include parallel configurations, and finally give a brief analysis.
Sequential Mode.
In sequential mode, BoSy, BoWSer, sdfhoa and Strix each competed with one configuration, and ltlsynt with three configurations.
The number of solved instances per configuration, as well as the number of uniquely solved instances, are given in Table 5. No tool could solve more than 267 of the selected instances, and a number of instances could not be solved by any of the participants within the timeout.
Tool (configuration) | Solved | Unique
Strix                | 267    | 12
BoSy                 | 244    | 1
sdfhoa               | 242    | 0
ltlsynt (ds)         | 239    | 0
ltlsynt (incr)       | 237    | 0
ltlsynt (sd)         | 233    | 0
BoWSer               | 205    | 0
Figure 4 gives a cactus plot of the runtimes for all sequential algorithms in the realizability track.
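A cactus plot ranks each configuration's solved instances by runtime: the k-th point on a curve pairs k with the k-th smallest runtime, so curves that stay low and extend further right indicate better performance. A minimal sketch of the underlying transformation, with made-up runtimes (`None` marking a timeout):

```python
def cactus_points(runtimes):
    """Cactus-plot coordinates for one configuration: the k-th point pairs
    the number of solved instances k with the k-th smallest runtime.
    Timed-out instances (None) are dropped."""
    solved = sorted(t for t in runtimes if t is not None)
    return list(enumerate(solved, start=1))

# Hypothetical runtimes in seconds for one configuration:
pts = cactus_points([12.0, None, 3.5, 40.0, None, 0.7])
# pts == [(1, 0.7), (2, 3.5), (3, 12.0), (4, 40.0)]
```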
Parallel Mode.
BoSy, BoWSer and Strix also entered in a parallel configuration. Again, the parallel configurations run on the same set of benchmark instances as in sequential mode, but runtime is measured in wall time instead of CPU time. The results are given in Table 6. The best parallel configuration, Strix (par), solves 266 instances; some benchmarks were not solved by any configuration.
Tool (configuration) | Solved | Unique
Strix (par)          | 266    | 0
BoSy (par)           | 242    | 0
BoWSer (par)         | 212    | 0
In Table 6, we again only count a benchmark instance as uniquely solved if it is not solved by any other sequential or parallel configuration. Therefore, none of the parallel configurations has a uniquely solved instance. Since the parallel configurations also do not increase the number of benchmarks that can be solved, we omit a cactus plot.
Both modes: Solved Instances of Parameterized Benchmarks.
For both the sequential and the parallel configurations, Figure 5 gives an overview of the number of solved instances per configuration, for the parameterized benchmarks used in SYNTCOMP 2018.
Analysis.
Like last year, the entrants of SYNTCOMP 2018 are very diverse: BoSy, BoWSer and sdfhoa implement different variants of bounded synthesis, while ltlsynt and Strix use different encodings into parity games. Remarkable is the performance of the new entrant Strix, which beats by a significant margin the updated versions of the best-performing configurations from last year, sdfhoa and ltlsynt. Moreover, BoSy has improved significantly and now solves the second-highest number of instances, due to its new approach for determining realizability.
Regarding the different configurations of ltlsynt that are new this year, we note that the difference in the number of instances that can be solved is rather small, so the choice between these configurations does not seem to be crucial.
An analysis of the parameterized benchmarks in Figure 5 shows the different strengths of the approaches: Strix is almost always among the best-performing tools, with the notable exception of the generalized_buffer benchmark, where sdfhoa performs best. On a number of benchmarks, like amba_decomposed_arbiter or ltl2dba_E, the bounded-synthesis-based approaches are clearly outperformed by the parity-game-based approaches, while on others, such as ltl2dba_R or the detector and detector_unreal benchmarks, BoSy and sdfhoa can at least beat ltlsynt (in addition to the aforementioned generalized_buffer, where they even beat Strix).
5.4 LTL (TLSF) Track: Synthesis
In the track for synthesis from LTL specifications in TLSF, participants were tested on the benchmark instances that were solved by at least one configuration in the LTL realizability track. Except for sdfhoa, all tools from the realizability track also competed in the synthesis track, some of them with additional configurations: BoSy competed in two sequential and two parallel configurations, Strix in three sequential and one parallel configuration, and BoWSer in three sequential and three parallel configurations. ltlsynt competed in the same three configurations as in the realizability track. Additionally, LazySynt participated hors concours.
As for the safety synthesis track, there are two rankings in the LTL synthesis track: one based on the number of instances that can be solved, and the other based on the quality of solutions, measured by their size. Again, a solution for a realizable specification is only considered correct if it can be model-checked within a separate timeout of one hour (cf. Section 3). In the following, we first present the results for the sequential configurations, then for the parallel configurations, and finally give an analysis of the results.
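The quality points referred to here follow a logarithmic size-ratio scheme: a verified solution earns 2 points when it matches the size of the reference solution, more when it is smaller, and fewer when it is larger. The sketch below illustrates this scheme; the +1 offsets (guarding against size zero) and the omission of any clamping of extreme ratios are our own simplifications, not necessarily the exact competition formula:

```python
import math

def quality_points(size, ref_size):
    """Illustrative quality score for one verified solution: 2 points for
    matching the reference size, more for smaller solutions, fewer for
    larger ones. The +1 offsets guard against size 0 and are our own
    simplification of the scheme described in the SYNTCOMP reports."""
    return 2 - math.log10((size + 1) / (ref_size + 1))

# A solution ten times larger than the reference earns roughly 1 point,
# ten times smaller roughly 3.
```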
Sequential Mode.
Table 7 summarizes the experimental results for the sequential configurations, including the number of solved benchmarks, the uniquely solved instances, and the number of solutions that could not be model-checked within the timeout. The last column gives the accumulated quality points over all correct solutions.
As before, the “solved” column gives the number of problems that have either been correctly determined unrealizable, or for which the tool has presented a solution that could be verified. Of the instances that could be solved by at least one configuration in the realizability track, the best-performing tool solved 257 (cf. Table 7), and some instances could not be solved by any tool. None of the tools provided any wrong solutions.
Tool (configuration) | Solved | Unique | MC Timeout | Quality
Strix (labels)       | 257    | 6      | 6          | 412
Strix (min)          | 248    | 0      | 16         | 416
Strix (basic)        | 247    | 0      | 17         | 387
BoSy (basic)         | 222    | 0      | 4          | 372
ltlsynt (incr)       | 214    | 0      | 19         | 257
ltlsynt (ds)         | 211    | 0      | 23         | 251
ltlsynt (sd)         | 210    | 0      | 23         | 245
BoWSer (simple)      | 206    | 0      | 0          | 308
BoSy (opt)           | 205    | 0      | 5          | 370
BoWSer (synth)       | 187    | 0      | 0          | 289
BoWSer (opt)         | 166    | 0      | 0          | 308
Parallel Mode.
In this mode, BoSy and BoWSer competed with parallel versions of their configurations from the sequential track, and Strix competed with a portfolio approach that runs multiple configurations in parallel. Table 8 summarizes the experimental results, in the same format as before. No configuration solved more than the 256 problem instances solved by Strix (portfolio), and some benchmarks could not be solved by any tool. None of the solutions were determined to be wrong.
As before, we only consider instances as uniquely solved if they are not solved by any other configuration, including sequential ones. Consequently, none of the solutions are unique.
Tool (configuration) | Solved | Unique | MC Timeout | Quality
Strix (portfolio)    | 256    | 0      | 6          | 446
BoSy (opt,par)       | 223    | 0      | 9          | 402
BoSy (basic,par)     | 223    | 0      | 12         | 371
BoWSer (simple,par)  | 212    | 0      | 0          | 315
BoWSer (synth,par)   | 194    | 0      | 0          | 300
BoWSer (opt,par)     | 162    | 0      | 0          | 302
Analysis.
As can be expected, the number of solved instances for each tool in synthesis is closely related to its solved instances in realizability checking. The number of uniquely solved instances of Strix and BoSy decreases, in part simply because multiple configurations of the same tool now all solve a problem that was previously solved uniquely. Furthermore, we note that all tools except BoWSer produce a significant number of solutions that could not be verified.
Considering the quality ranking, the best-performing sequential configuration is not the one that solves the highest number of instances: even though Strix (min) solves nine fewer instances, it beats Strix (labels) in the quality ranking. In parallel mode, the portfolio approach of Strix manages to combine the best of both worlds: it solves just one instance less than the best sequential configuration while significantly improving the quality score.
Figure 6 plots the solution sizes of selected configurations. It shows, somewhat surprisingly, that many solution sizes of Strix in the (min) or (portfolio) configurations lie between the sizes of solutions produced by the bounded synthesis approaches implemented in BoSy and BoWSer. Only the configuration BoWSer (opt) produces clearly smaller solutions (for roughly half of the realizable instances). In contrast, ltlsynt in most cases produces solutions that are bigger by a significant margin, roughly ten times the size of the solutions of BoSy.
A further analysis of the quality and size of implementations shows that BoWSer (opt,par) is the configuration with the highest average quality on the problems it does solve. Furthermore, BoSy (opt,par) and Strix (portfolio) each produce new reference solutions for 54 instances, more than any other configuration. The analysis for all tools is given in Table 9.
Tool (configuration)           | Avg. Quality | New Ref. Solutions
BoWSer (opt,par)               | 1.862        | 34
BoWSer (opt)                   | 1.848        | 33
BoSy (opt,par)                 | 1.802        | 54
BoSy (opt)                     | 1.794        | 48
Strix (portfolio)              | 1.741        | 54
Strix (min)                    | 1.666        | 49
BoSy (basic) & (basic,par)     | 1.661        | 39
Strix (labels)                 | 1.595        | 45
Strix (basic)                  | 1.555        | 46
BoWSer (synth,par)             | 1.544        | 3
BoWSer (synth)                 | 1.534        | 2
BoWSer (simple) & (simple,par) | 1.484        | 4
ltlsynt (incr)                 | 1.206        | 8
ltlsynt (ds)                   | 1.173        | 7
ltlsynt (sd)                   | 1.171        | 8
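Both columns of Table 9 can be reproduced from per-instance data: the average quality divides a configuration's accumulated quality points by the number of instances it solves, and a solution counts as a new reference solution when it is smaller than the current reference. A sketch with a hypothetical per-instance record format:

```python
def table9_row(results):
    """`results`: list of (quality_points, solution_size, ref_size) tuples
    for the instances one configuration solved (hypothetical record format).
    Returns (average quality, number of new reference solutions)."""
    avg = sum(q for q, _, _ in results) / len(results)
    new_refs = sum(1 for _, size, ref in results if size < ref)
    return avg, new_refs

# Three hypothetical solved instances: one matches the reference size,
# one improves on it, one is much larger.
avg, new_refs = table9_row([(2.0, 10, 10), (3.0, 5, 50), (1.0, 80, 8)])
# avg == 2.0, new_refs == 1
```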
6 Conclusions
SYNTCOMP 2018 was the fifth iteration of the reactive synthesis competition, and showed that there is still substantial progress from year to year. While the safety track saw only minor updates and one new tool that was entered hors concours, the LTL track again saw major changes, including the new tool Strix that won all official categories of the track.
In addition, the benchmark set of the LTL track has grown significantly, and this year, for the first time, we did not release all benchmarks beforehand.
Acknowledgments. The organization of SYNTCOMP 2018 was supported by the Austrian Science Fund (FWF) through project RiSE (S11406-N23) and by the German Research Foundation (DFG) through project “Automatic Synthesis of Distributed and Parameterized Systems” (JA 2357/2-1), and its setup and execution by the European Research Council (ERC) Grant OSARES (No. 683300).
The development of SafetySynth and BoSy was supported by the ERC Grant OSARES (No. 683300).
The development of Strix was supported by the German Research Foundation (DFG) projects “Game-based Synthesis for Industrial Automation” (253384115) and “Verified Model Checkers” (317422601) and by the ERC Advanced Grant PaVeS (No. 787367).
References
 [1]
 [2] Andreas Abel & Jan Reineke (2015): MeMin: SAT-based Exact Minimization of Incompletely Specified Mealy Machines. In: Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2015, Austin, TX, USA, November 2-6, 2015, pp. 94–101, doi:10.1109/ICCAD.2015.7372555.
 [3] Luca de Alfaro & Pritam Roy (2010): Solving games via three-valued abstraction refinement. Inf. Comput. 208(6), pp. 666–676, doi:10.1016/j.ic.2009.05.007.
 [4] Tomás Babiak, Mojmír Kretínský, Vojtech Rehák & Jan Strejcek (2012): LTL to Büchi Automata Translation: Fast and More Deterministic. In: TACAS, LNCS 7214, Springer, pp. 95–109, doi:10.1007/978-3-642-28756-5_8.
 [5] Adrian Balint, Daniel Diepold, Daniel Gall, Simon Gerber, Gregor Kapler & Robert Retz (2011): EDACC - An Advanced Platform for the Experiment Design, Administration and Analysis of Empirical Algorithms. In: LION 5, Selected Papers, LNCS 6683, Springer, pp. 586–599, doi:10.1007/978-3-642-25566-3_46.
 [6] Armin Biere, Florian Lonsing & Martina Seidl (2011): Blocked Clause Elimination for QBF. In: CADE, LNCS 6803, Springer, pp. 101–115.
 [7] Henrik Björklund, Sven Sandberg & Sergei G. Vorobyov (2004): A Combinatorial Strongly Subexponential Strategy Improvement Algorithm for Mean Payoff Games. In: Mathematical Foundations of Computer Science 2004, 29th International Symposium, MFCS 2004, LNCS 3153, Springer, pp. 673–685, doi:10.1007/978-3-540-28629-5_52.
 [8] R. Bloem, R. Könighofer & M. Seidl (2014): SAT-Based Synthesis Methods for Safety Specs. In: VMCAI, LNCS 8318, Springer, pp. 1–20, doi:10.1007/978-3-642-54013-4_1.
 [9] Robert K. Brayton & Alan Mishchenko (2010): ABC: An Academic Industrial-Strength Verification Tool. In: CAV, LNCS 6174, Springer, pp. 24–40, doi:10.1007/978-3-642-14295-6_5.
 [10] Romain Brenguier, Guillermo A. Pérez, Jean-François Raskin & Ocan Sankur (2014): AbsSynthe: abstract synthesis from succinct safety specifications. In: SYNT, EPTCS 157, Open Publishing Association, pp. 100–116, doi:10.4204/EPTCS.157.11.
 [11] Romain Brenguier, Guillermo A. Pérez, Jean-François Raskin & Ocan Sankur (2015): Compositional Algorithms for Succinct Safety Games. In: SYNT, EPTCS 202, Open Publishing Association, pp. 98–111, doi:10.4204/EPTCS.202.7.
 [12] Alonzo Church (1962): Logic, arithmetic and automata. In: Proceedings of the International Congress of Mathematicians, pp. 23–35.
 [13] Alexandre Duret-Lutz, Alexandre Lewkowicz, Amaury Fauchille, Thibaud Michaud, Etienne Renault & Laurent Xu (2016): Spot 2.0 — a framework for LTL and ω-automata manipulation. In: Proceedings of the 14th International Symposium on Automated Technology for Verification and Analysis (ATVA'16), LNCS 9938, Springer, pp. 122–129, doi:10.1007/978-3-319-46520-3_8.
 [14] Peter Faymonville, Bernd Finkbeiner, Markus N. Rabe & Leander Tentrup (2017): Encodings of Bounded Synthesis. In: TACAS (1), LNCS 10205, pp. 354–370, doi:10.1007/978-3-662-54577-5_20.
 [15] Peter Faymonville, Bernd Finkbeiner & Leander Tentrup (2017): BoSy: An Experimentation Framework for Bounded Synthesis. In: CAV (2), LNCS 10427, Springer, pp. 325–332, doi:10.1007/978-3-319-63390-9_17.
 [16] Bernd Finkbeiner & Felix Klein (2016): Bounded Cycle Synthesis. In: CAV (1), LNCS 9779, Springer, pp. 118–135, doi:10.1007/978-3-319-41528-4_7.
 [17] Bernd Finkbeiner, Felix Klein, Ruzica Piskac & Mark Santolucito (2017): Synthesizing Functional Reactive Programs. CoRR abs/1712.00246.
 [18] Bernd Finkbeiner & Sven Schewe (2013): Bounded synthesis. STTT 15(5-6), pp. 519–539, doi:10.1007/s10009-012-0228-z.
 [19] S. Gélineau (2016): FRP Zoo - comparing many FRP implementations by reimplementing the same toy app in each. Available at https://github.com/gelisam/frpzoo.
 [20] Johann Glaser & Clifford Wolf (2014): Methodology and Example-Driven Interconnect Synthesis for Designing Heterogeneous Coarse-Grain Reconfigurable Architectures, pp. 201–221. Springer International Publishing, Cham, doi:10.1007/978-3-319-01418-0_12.
 [21] Swen Jacobs (2014): Extended AIGER Format for Synthesis. CoRR abs/1405.5793. Available at http://arxiv.org/abs/1405.5793.
 [22] Swen Jacobs, Nicolas Basset, Roderick Bloem, Romain Brenguier, Maximilien Colange, Peter Faymonville, Bernd Finkbeiner, Ayrat Khalimov, Felix Klein, Thibaud Michaud, Guillermo A. Pérez, Jean-François Raskin, Ocan Sankur & Leander Tentrup (2017): The 4th Reactive Synthesis Competition (SYNTCOMP 2017): Benchmarks, Participants & Results. In: SYNT@CAV, EPTCS 260, pp. 116–143, doi:10.4204/EPTCS.260.10.
 [23] Swen Jacobs & Roderick Bloem (2016): The Reactive Synthesis Competition: SYNTCOMP 2016 and Beyond. In: SYNT@CAV 2016, EPTCS 229, pp. 133–148, doi:10.4204/EPTCS.229.11.
 [24] Swen Jacobs, Roderick Bloem, Romain Brenguier, Rüdiger Ehlers, Timotheus Hell, Robert Könighofer, Guillermo A. Pérez, Jean-François Raskin, Leonid Ryzhyk, Ocan Sankur, Martina Seidl, Leander Tentrup & Adam Walker (2017): The first reactive synthesis competition (SYNTCOMP 2014). STTT 19(3), pp. 367–390, doi:10.1007/s10009-016-0416-3.
 [25] Swen Jacobs, Roderick Bloem, Romain Brenguier, Ayrat Khalimov, Felix Klein, Robert Könighofer, Jens Kreber, Alexander Legg, Nina Narodytska, Guillermo A. Pérez, Jean-François Raskin, Leonid Ryzhyk, Ocan Sankur, Martina Seidl, Leander Tentrup & Adam Walker (2016): The 3rd Reactive Synthesis Competition (SYNTCOMP 2016): Benchmarks, Participants & Results. In: SYNT@CAV, EPTCS 229, pp. 149–177, doi:10.4204/EPTCS.229.12.
 [26] Swen Jacobs, Roderick Bloem, Romain Brenguier, Robert Könighofer, Guillermo A. Pérez, Jean-François Raskin, Leonid Ryzhyk, Ocan Sankur, Martina Seidl, Leander Tentrup & Adam Walker (2016): The Second Reactive Synthesis Competition (SYNTCOMP 2015). In: SYNT, EPTCS 202, Open Publishing Association, pp. 27–57, doi:10.4204/EPTCS.202.4.
 [27] Swen Jacobs, Felix Klein & Sebastian Schirmer (2016): A High-Level LTL Synthesis Format: TLSF v1.1. In: SYNT@CAV 2016, EPTCS 229, pp. 112–132, doi:10.4204/EPTCS.229.10.
 [28] Swen Jacobs & Mouhammad Sakr (2018): A Symbolic Algorithm for Lazy Synthesis of Eager Strategies. In: ATVA 2018, LNCS 11138, Springer, pp. 211–227, doi:10.1007/978-3-030-01090-4_13.
 [29] Ayrat Khalimov, Swen Jacobs & Roderick Bloem (2013): PARTY Parameterized Synthesis of Token Rings. In: CAV, LNCS 8044, Springer, pp. 928–933, doi:10.1007/978-3-642-39799-8_66.
 [30] Alexander Legg, Nina Narodytska & Leonid Ryzhyk (2016): A SAT-Based Counterexample Guided Method for Unbounded Synthesis. In: CAV (2), LNCS 9780, Springer, pp. 364–382, doi:10.1007/978-3-319-41540-6_20.
 [31] Jia Hui Liang, Vijay Ganesh, Pascal Poupart & Krzysztof Czarnecki (2016): Learning Rate Based Branching Heuristic for SAT Solvers. In Nadia Creignou & Daniel Le Berre, editors: Theory and Applications of Satisfiability Testing - SAT 2016 - 19th International Conference, Bordeaux, France, July 5-8, 2016, Proceedings, LNCS 9710, Springer, pp. 123–140, doi:10.1007/978-3-319-40970-2_9.
 [32] Michael Luttenberger (2008): Strategy Iteration using Non-Deterministic Strategies for Solving Parity Games. CoRR abs/0806.2923. Available at http://arxiv.org/abs/0806.2923.
 [33] Petter Nilsson, Necmiye Ozay & Jun Liu (2017): Augmented finite transition systems as abstractions for control synthesis. Discrete Event Dynamic Systems 27(2), pp. 301–340.
 [34] Markus N. Rabe & Leander Tentrup (2015): CAQE: A Certifying QBF Solver. In: FMCAD, IEEE, pp. 136–143.
 [35] Sven Schewe (2008): An Optimal Strategy Improvement Algorithm for Solving Parity and Payoff Games. In: Computer Science Logic, 22nd International Workshop, CSL 2008, LNCS 5213, Springer, pp. 369–384, doi:10.1007/978-3-540-87531-4_27.
 [36] M. Seidl & R. Könighofer (2014): Partial witnesses from preprocessed quantified Boolean formulas. In: DATE'14, IEEE, pp. 1–6, doi:10.7873/DATE2014.162.
 [37] Leander Tentrup (2016): Solving QBF by Abstraction. CoRR abs/1604.06752. Available at http://arxiv.org/abs/1604.06752.
 [38] Leander Tentrup (2017): On Expansion and Resolution in CEGAR Based QBF Solving. In: CAV (2), LNCS 10427, Springer, pp. 475–494.
 [39] Cheng-Yin Wu, Chi-An Wu, Chien-Yu Lai & Chung-Yang R. Huang (2014): A Counterexample-Guided Interpolant Generation Algorithm for SAT-Based Model Checking. IEEE Trans. on CAD of Integrated Circuits and Systems 33(12), pp. 1846–1858, doi:10.1109/TCAD.2014.2363395.
 [40] Bernhard Wymann, Eric Espié, Christophe Guionneau, Christos Dimitrakakis, Rémi Coulom & Andrew Sumner (2015): Torcs, the open racing car simulator. Software available at http://torcs.sourceforge.net.
 [41] Wieslaw Zielonka (1998): Infinite Games on Finitely Coloured Graphs with Applications to Automata on Infinite Trees. Theor. Comput. Sci. 200(1-2), pp. 135–183, doi:10.1016/S0304-3975(98)00009-7.