Improving SAT-solving with Machine Learning

30 July 1999

In this project, we aimed to improve the runtime of Minisat, a Conflict-Driven Clause Learning (CDCL) solver for the propositional Boolean satisfiability (SAT) problem. We first trained a logistic regression model to predict the satisfiability of propositional Boolean formulae after fixing the values of a certain fraction of the variables in each formula. We then applied the logistic model in a preprocessing period added to Minisat, which determines the preferable initial value (either true or false) of each Boolean variable using a Monte-Carlo approach. Concretely, in each Monte-Carlo trial, we fixed the values of a certain fraction of randomly selected variables and used our logistic regression model to estimate the confidence that the resulting sub-formula was satisfiable. The initial value of each variable was then set based on the mean confidence scores of the trials that started from the literals of that variable. We were particularly interested in correctly setting the initial values of the backbone variables, i.e., variables that take the same value in all solutions of a SAT formula. Our Monte-Carlo method was able to set 78% of the backbones correctly, and, excluding the preprocessing time, the runtime of Minisat on satisfiable formulae decreased compared with the default setting. However, our method did not outperform vanilla Minisat in overall runtime, as the decrease in conflicts was outweighed by the long runtime of the preprocessing period.

Haoze Wu
Department of Mathematics and Computer Science
Davidson College
Davidson, NC, USA
Raghuram Ramanujan
Davidson College
P.O. Box 5996, 209 Ridge Rd
Davidson, NC, USA



  • 3SAT; Satisfiability; Logistic; Monte-Carlo; Machine Learning

    In the propositional Boolean satisfiability (SAT) problem, one is given a Boolean formula, i.e., an expression that consists of Boolean variables connected by the fundamental Boolean operators "and" (∧), "or" (∨) and "not" (¬). One is then tasked with determining whether there is an assignment of true/false values to the variables such that the overall formula evaluates to true. For instance, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x3) is a Boolean formula with three Boolean variables; it evaluates to true when x1, x2, and x3 are all set to true. Most state-of-the-art complete SAT solvers use the Conflict-Driven Clause Learning (CDCL) algorithm. Though able to solve large industrial formulae, the CDCL algorithm cannot efficiently solve random formulae of even moderate size (300-500 variables). Our goal is to use machine learning to improve the runtime of Minisat, a CDCL SAT solver, on random Boolean formulae.
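    As a toy illustration of the problem (exhaustive enumeration, not what a CDCL solver does), a formula over n variables can be checked by trying all 2^n assignments. The sketch below uses the common DIMACS-style convention of encoding a literal as a signed integer:

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT check. Each clause is a list of signed integers:
    k stands for variable x_k, and -k for its negation."""
    for assignment in product([False, True], repeat=n_vars):
        # The formula is satisfied if every clause has a true literal.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (not x1 or x3): satisfiable (e.g. all variables true).
print(is_satisfiable([[1, -2], [-1, 3]], 3))  # True
# (x1) and (not x1): unsatisfiable.
print(is_satisfiable([[1], [-1]], 1))         # False
```

    Enumeration takes exponential time, which is precisely why practical solvers rely on the CDCL algorithm rather than exhaustive search.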

    The SAT problem is one of the most studied NP-complete problems because of its theoretical significance and practical applications [3]. Our work is inspired by the work of Xu et al., who successfully applied machine learning to SAT problems. In [5], they used machine learning to create an ensemble SAT solver, SATzilla, which uses several SAT solvers as subroutines and makes a per-instance decision on which solver to pick for a given input formula. In another work [4], Xu et al. used machine learning to predict the satisfiability of hard Boolean formulae and obtained 70% accuracy. In our project, instead of using machine learning to optimize the selection of SAT solvers, we aimed to exploit the learnability of satisfiability to improve the performance of a particular solver, Minisat [1]. Minisat uses the CDCL algorithm, which selects a variable from the formula to "branch on", fixes its value to either true or false, simplifies the formula with this assignment, and then recursively solves the rest of the formula. When a conflict (i.e., a contradiction) is detected at a certain branching level, the algorithm backtracks. Though Minisat uses a heuristic to optimize the selection of the branching variable [6], it does not make intelligent choices when assigning values to these variables. We aim to use machine learning to set the default branching values of variables before running the CDCL algorithm, in order to decrease the number of conflicts and thus the runtime of Minisat.

    Our project relied on the hypothesis that if, at each branching point, the branching variable is set to the value that is more likely to lead to a solution, then a solution will be found relatively quickly.
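    To make this hypothesis concrete, consider the minimal DPLL-style sketch below (a simplification of CDCL without clause learning; the naive branching-variable choice and the `preferred_value` table are illustrative assumptions, not Minisat's actual heuristics). The preferred polarity is tried first at every branch, so a good `preferred_value` table can steer the search toward a solution with fewer backtracks:

```python
def dpll(clauses, assignment, preferred_value):
    """Minimal DPLL-style search (no clause learning). preferred_value
    maps each variable to the truth value tried first when branching."""
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied under the assignment
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # conflict: every literal in the clause is false
        simplified.append(rest)
    if not simplified:
        return assignment  # all clauses satisfied
    var = abs(simplified[0][0])  # naive branching-variable choice
    first = preferred_value.get(var, False)
    for value in (first, not first):  # try the preferred polarity first
        result = dpll(simplified, {**assignment, var: value}, preferred_value)
        if result is not None:
            return result
    return None  # both polarities failed: backtrack

# Prefer True everywhere: the search tries the all-true branch first.
model = dpll([[1, -2], [-1, 3]], {}, {1: True, 2: True, 3: True})
```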

    We first trained a logistic regression model to predict the satisfiability of random Boolean formulae represented in conjunctive normal form (CNF). A CNF formula is a conjunction of clauses, where each clause is a disjunction of variables (or their negations). A 3-CNF formula is one in which each clause contains exactly three variables (or their negations), for example (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ ¬x3). There is a well-known polynomial-time procedure for converting arbitrary SAT formulae into 3-CNF form [?], so this restriction on clause length does not compromise generality.

    We created the training data for our learning model by first generating random 3-CNF formulae with 300 variables. Since we planned to apply our fitted model to make predictions about the formulae generated by the CDCL process, we created additional instances by randomly fixing the values of a fraction p of the variables in these formulae, for varying values of p. We extracted 10 features from each of these formulae to create the individual training examples; these features are listed in Table 1. The selection of the features was inspired by [4]. The target variable is a binary value indicating whether the formula in question was satisfiable. Building this regression model constitutes the offline phase of our new solver.

    1 Clause to variable ratio
    2 Fraction of binary clauses
    3 Fraction of horn clauses
    4 POSNEG_ratio_var_max
    5 POSNEG_ratio_var_min
    6 POSNEG_ratio_var_mean
    7 POSNEG_ratio_var_std
    8 POSNEG_ratio_var_variation
    9 LPSLACK_mean
    10 LPSLACK_coeff_variation
    Table 1: We refer the reader to [4] for the definitions of features 4-10
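    The first three features have direct syntactic definitions. The sketch below computes them for a signed-integer clause encoding, along with one plausible reading of the per-variable POSNEG ratio (the authoritative definitions of features 4-10 are those given in [4], which this sketch does not claim to reproduce exactly):

```python
import statistics

def extract_features(clauses, n_vars):
    """Compute a few of Table 1's syntactic features for a CNF formula
    given as lists of signed integers (k = variable k, -k = "not k")."""
    n_clauses = len(clauses)
    features = {
        "clause_to_var_ratio": n_clauses / n_vars,
        "frac_binary": sum(len(c) == 2 for c in clauses) / n_clauses,
        # A Horn clause contains at most one positive literal.
        "frac_horn": sum(sum(l > 0 for l in c) <= 1 for c in clauses) / n_clauses,
    }
    # Per-variable fraction of occurrences that are positive (one plausible
    # reading of the POSNEG statistics; see the cited definitions).
    ratios = []
    for v in range(1, n_vars + 1):
        occurrences = [l for c in clauses for l in c if abs(l) == v]
        if occurrences:
            ratios.append(sum(l > 0 for l in occurrences) / len(occurrences))
    features["posneg_ratio_var_mean"] = statistics.mean(ratios)
    return features

f = extract_features([[1, -2, 3], [-1, 2, -3], [1, 2, 3]], 3)
```

    A real feature extractor would also compute the min, max, standard deviation, and coefficient of variation of such per-variable statistics, plus the LP-relaxation-based LPSLACK features.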

    In the online phase of the solver, we added a preprocessing period to Minisat to determine the preferred initial value of each Boolean variable with a Monte-Carlo approach. Concretely, to determine the preferred initial value of a variable v, we first set v to true and conducted a Monte-Carlo simulation starting from the remaining (simplified) formula. In each Monte-Carlo trial, we randomly fixed the values of a fraction p of the variables, and used our logistic regression model to calculate the likelihood that the resulting formula was satisfiable. A mean likelihood score was computed by averaging the scores from the outcomes of many trials. We repeated the same operations with v set to false. We then set the initial value of v to whichever value (true or false) suggested a higher chance of leading to a satisfiable solution, based on the outcome of the Monte-Carlo simulations. We were particularly interested in gauging the effectiveness of our model in correctly setting the values of so-called "backbone variables", i.e., variables that take the same value in all solutions of a SAT formula [2].
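    A minimal sketch of this preprocessing loop is shown below. Here `predict_sat_prob` is a hypothetical stand-in for the fitted logistic regression model (taking a formula and a partial assignment and returning a satisfiability confidence), and the trial count, fixed fraction, and tie-breaking rule are illustrative choices rather than the exact settings of our implementation:

```python
import random

def choose_initial_values(clauses, n_vars, predict_sat_prob,
                          fix_fraction=0.04, n_trials=50, seed=0):
    """Monte-Carlo sketch: for each variable, estimate which initial
    polarity (true/false) looks more likely to lead to a satisfiable
    formula, according to a supplied satisfiability-confidence model."""
    rng = random.Random(seed)
    polarity = {}
    for v in range(1, n_vars + 1):
        scores = {}
        for value in (True, False):
            trial_scores = []
            for _ in range(n_trials):
                assignment = {v: value}
                # Randomly fix a fraction of the remaining variables.
                rest = [u for u in range(1, n_vars + 1) if u != v]
                k = min(max(1, int(fix_fraction * n_vars)), len(rest))
                for u in rng.sample(rest, k):
                    assignment[u] = rng.random() < 0.5
                # Score the confidence that the sub-formula is satisfiable.
                trial_scores.append(predict_sat_prob(clauses, assignment))
            scores[value] = sum(trial_scores) / n_trials
        # Prefer the polarity with the higher mean confidence.
        polarity[v] = scores[True] >= scores[False]
    return polarity
```

    In the full pipeline, `predict_sat_prob` would simplify the formula under the partial assignment, extract the Table 1 features from the resulting sub-formula, and feed them to the logistic regression model.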

    We evaluated the performance of the preprocessing period by the ratio of backbones it set correctly, the number of conflicts the subsequent CDCL run yielded, and the actual runtime of Minisat. Since our method was geared towards finding solutions quickly, we focused our attention on satisfiable Boolean formulae.

    As shown in Table 2, our regression model's accuracy in predicting satisfiability ranged from 68.56% (with 2% of the variables fixed per trial) to 77.94% (with 4% fixed). The best performance of the Monte-Carlo method was obtained when 4% of the variables were fixed per trial; on average, it set 77.56% of the backbones correctly.

    Fraction fixed  SAT prediction score  Backbone setting score
    4%              77.94%                77.56%
    2%              68.56%                77.56%
    0%              70.38%                68.00%
    Table 2: The performance of SAT prediction and backbone setting with different depths of the Monte-Carlo trials

    Compared with the default setting of Minisat (which always sets the branching variable to false), our method reduced the number of conflicts on average, outperforming default Minisat in conflict count on a majority of the test cases. However, it did not outperform vanilla Minisat in runtime, as the decrease in conflicts was outweighed by the long runtime of the preprocessing period.

    • [1] N. Eén and N. Sörensson. An extensible SAT-solver. pages 502–518, May 2003.
    • [2] P. Kilby, J. Slaney, S. Thiébaux, T. Walsh, et al. Backbones and backdoors in satisfiability. 2005.
    • [3] T. J. Schaefer. The complexity of satisfiability problems. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, pages 216–226. ACM, 1978.
    • [4] L. Xu, H. H. Hoos, and K. Leyton-Brown. Predicting satisfiability at the phase transition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12, pages 584–590. AAAI Press, 2012.
    • [5] L. Xu, F. Hutter, H. H. Hoos, and K. Leyton-Brown. SATzilla: Portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research, 32:565–606, 2008.
    • [6] L. Zhang, C. F. Madigan, M. H. Moskewicz, and S. Malik. Efficient conflict driven learning in a Boolean satisfiability solver. In Proceedings of the 2001 IEEE/ACM International Conference on Computer-Aided Design, pages 279–285. IEEE Press, 2001.