On Hashing-Based Approaches to Approximate DNF-Counting¹
¹ The author list has been sorted alphabetically by last name; this should not be used to determine the extent of authors' contributions.
Abstract
Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without the use of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting has poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques, Symbolic Hashing and Stochastic Cell-Counting, along with a new hash family of Row-Echelon hash functions. These innovations allow us to design a hashing-based FPRAS for DNF counting of similar complexity (up to polylog factors) as that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.
Kuldeep S. Meel, Aditya A. Shrotri, and Moshe Y. Vardi
Subject classification: G.1.2 Special Function Approximation; F.4.1 Logic and Constraint Programming
1 Introduction
Propositional model counting is a fundamental problem in artificial intelligence with a wide range of applications including probabilistic inference, databases, decision making under uncertainty, and the like [1, 6, 7, 24]. Given a Boolean formula φ, the problem of propositional model counting, also referred to as #SAT, is to compute the number of solutions of φ [29]. Depending on whether φ is expressed as a CNF or DNF formula, the corresponding model counting problem is denoted #CNF or #DNF, respectively. Both #CNF and #DNF have a wide variety of applications. For example, probabilistic-inference queries reduce to solving #CNF instances [1, 21, 22, 24], while evaluation of queries over probabilistic databases reduces to #DNF instances [6]. Consequently, both #CNF and #DNF have been of theoretical as well as practical interest over the years [16, 18, 23, 25]. In his seminal paper, Valiant [29] showed that both #CNF and #DNF are #P-complete, a class of problems believed to be intractable in general.
Given the intractability of #CNF and #DNF, much of the interest lies in their approximate variants, wherein for given tolerance and confidence parameters ε and δ, the goal is to compute an estimate that is within a multiplicative factor (1+ε) of the true count with confidence at least 1−δ. While both #CNF and #DNF are #P-complete in their exact forms, the approximate variants differ in complexity: approximating #DNF can be accomplished in fully polynomial randomized time [5, 17, 18], but approximate #CNF is NP-hard [25]. Consequently, different techniques have emerged for designing scalable approximation schemes for #DNF and #CNF.
In the context of #DNF, the works of Karp, Luby, and Madras [17, 18] led to the development of highly efficient Monte Carlo-based techniques, whose time complexity is linear in the size of the formula. On the other hand, hashing-based techniques have emerged as a scalable approach to approximate model counting of CNF formulas [3, 4, 10, 13, 25], and are effective even for problems with existing FPRAS, such as network reliability [8]. These hashing-based techniques employ 2-universal hash functions to partition the space of satisfying assignments of a CNF formula into cells such that a randomly chosen cell contains only a small number of solutions. Furthermore, the number of solutions across the cells is roughly equal and, therefore, an estimate of the total count can be obtained by counting the number of solutions in a cell and scaling the obtained count by the number of cells. Since counting the number of solutions in a cell can be accomplished efficiently by invoking a SAT solver when the number of solutions is small, hashing-based techniques can take advantage of recent progress in the development of efficient SAT solvers. Consequently, algorithms such as those of [3, 4] have been shown to scale to instances with hundreds of thousands of variables.
While the Monte Carlo techniques introduced in the works of Karp et al. have been shown not to be applicable in the context of approximate #CNF [18], it was not known whether hashing-based techniques could be employed to obtain efficient algorithms for #DNF. Recently, significant progress in this direction was achieved by Chakraborty, Meel and Vardi [4], who showed that the hashing-based framework of ApproxMC2 could be employed to obtain an FPRAS for #DNF counting. (It is worth noting that several hashing-based algorithms based on [10, 28] do not lead to FPRAS schemes for #DNF despite close similarity to Chakraborty et al.'s approach.) There is, however, no precise complexity analysis in [4]. In this paper, we provide a complexity analysis of the proposed scheme of Chakraborty et al., which is worse than quartic in the size of the formula. In comparison, state-of-the-art approaches achieve complexity linear in the number of variables and cubes for #DNF counting. This begs the question: How powerful is the hashing-based framework in the context of DNF counting? In particular, can it lead to algorithms competitive in runtime complexity with the state of the art?
In this paper, we provide a positive answer to this question. To achieve such a significant reduction in complexity, we offer three novel algorithmic techniques: (i) a new class of 2-universal hash functions that enables fast enumeration of solutions using Gray codes, (ii) Symbolic Hashing, and (iii) Stochastic Cell-Counting. These techniques allow us to achieve a complexity of Õ(mn log(1/δ)/ε²), which is within polylog factors of the complexity achieved by Karp et al. [18]. Here, m and n are the number of cubes and variables respectively, while ε and δ are the tolerance and confidence of the approximation. Furthermore, we believe that these techniques are not restricted to #DNF. Given recent breakthroughs achieved in the development of hashing-based CNF-counting techniques, we believe our techniques have the potential for a wide variety of applications.
2 Preliminaries
DNF Formulas and Counting
We use Greek letters φ, θ and ψ to denote Boolean formulas. A formula φ over n Boolean variables is in Disjunctive Normal Form (DNF) if it is a disjunction over conjunctions of variables or their negations. We use vars(φ) to denote the set of variables appearing in φ. Each occurrence of a variable or its negation is called a literal. The disjuncts of the formula are called cubes, and we denote the i-th cube by φ_i. Thus φ = φ_1 ∨ φ_2 ∨ ⋯ ∨ φ_m, where each φ_i is a conjunction of literals. We will use n and m to denote the number of variables and the number of cubes in the input DNF formula, respectively. The number of literals in a cube φ_i is called its width and is denoted width(φ_i).
An assignment to all the variables can be represented by a vector x ∈ {0,1}^n, with 1 corresponding to true and 0 to false. {0,1}^n is the set of all possible assignments, which we refer to as the universe or state space interchangeably. An assignment x is called a satisfying assignment for a formula φ if φ evaluates to true under x; in other words, x satisfies φ, denoted x ⊨ φ. Note that an assignment x satisfies a DNF formula φ if x ⊨ φ_i for some cube φ_i. The DNF-Counting Problem is to count the number of satisfying assignments of a DNF formula.
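To make the satisfaction check concrete, the following minimal Python sketch evaluates a DNF formula on an assignment. The list-of-signed-integers cube encoding is our own illustrative choice, not notation from the paper.

```python
def satisfies_cube(x, cube):
    """Check whether the 0/1 assignment x satisfies a single cube.

    A cube is encoded as a list of signed integers over 1-indexed variables:
    +i requires x_i = 1, and -i requires x_i = 0.
    """
    return all(x[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in cube)


def satisfies_dnf(x, cubes):
    # A DNF formula is satisfied iff at least one of its cubes is satisfied.
    return any(satisfies_cube(x, cube) for cube in cubes)
```

For example, for φ = (x₁ ∧ ¬x₂) ∨ x₃ the assignment 100 satisfies the first cube, while 010 satisfies neither cube.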
Next, we formalize the concept of a counting problem. Let R ⊆ {0,1}* × {0,1}* be a relation that is decidable in polynomial time, such that there is a polynomial p for which every (x, y) ∈ R satisfies |y| ≤ p(|x|). The decision problem corresponding to R asks whether, for a given x, there exists a y such that (x, y) ∈ R. Such a problem is in NP. Here, x is called the problem instance and y is called the witness. We denote the set of all witnesses for a given x by R_x. The counting problem corresponding to R is to calculate |R_x| for a given x. Such a problem is in #P [29]. The DNF-Counting problem is an example of this formalism: a formula φ is a problem instance and a satisfying assignment is a witness of φ. The set of satisfying assignments, or solution space, is denoted Sol(φ), and the goal is to compute |Sol(φ)|. The problem is known to be #P-complete, and hence believed to be intractable [27]. Therefore, we look at what it means to efficiently and accurately approximate this problem.
A fully polynomial randomized approximation scheme (FPRAS) is a randomized algorithm that takes as input a problem instance x, a tolerance ε, and a confidence parameter δ, and outputs a random variable c such that Pr[|R_x|/(1+ε) ≤ c ≤ (1+ε)·|R_x|] ≥ 1−δ, and whose running time is polynomial in |x|, 1/ε, and log(1/δ) [17]. Notably, while exact DNF-counting is inter-reducible with exact CNF-counting, the approximate versions of the two problems are not, because multiplicative approximation is not closed under complementation.
Matrix Notation
We use lowercase letters such as x and y to denote scalar variables, with subscripts as required. In this paper we work over the Boolean ring, where the variables are Boolean, 'addition' is the XOR operation (⊕), and 'multiplication' is the AND operation (∧). We use the letters i, j and k as indices or to denote positions. We denote sets by non-boldface capital letters. We use capital boldface letters such as A and B to denote matrices, and small boldface letters such as x, y and b to denote vectors. A^{m×n} denotes a matrix of m rows and n columns, while x^{(n)} denotes a vector of length n. 0 and 1 are the all-0s and all-1s vectors, respectively; we omit the dimensions when clear from context. x[i] denotes the i-th element of x, while a[i][j] denotes the element in the i-th row and j-th column of A. A[i₁:i₂][j₁:j₂] denotes the submatrix of A between rows i₁ and i₂ (excluding i₂) and columns j₁ and j₂ (excluding j₂). Similarly, x[i₁:i₂] denotes the subvector of x between index i₁ and index i₂, excluding i₂. The i-th row of A is denoted A[i][:] and the j-th column A[:][j]. The matrix formed by concatenating the rows of matrices A and B is written in block notation as [A; B], while [A | B] represents concatenation of columns. Similarly, the concatenation of vectors x and y is written [x; y]. The dot product between a matrix A and a vector x is written A·x. The vector formed by element-wise XOR of vectors x and y is denoted x ⊕ y.
Hash Functions
A hash function h : {0,1}^n → {0,1}^m partitions the elements of the domain {0,1}^n into 2^m cells. h(x) = y implies that h maps the assignment x to the cell y. h⁻¹(y) is the set of assignments that map to the cell y. In the context of counting, 2-universal families of hash functions, denoted H(n, m), are of particular importance. When h is sampled uniformly at random from H(n, m), 2-universality entails

Pr[h(x) = y] = 1/2^m

for all x ∈ {0,1}^n and y ∈ {0,1}^m, and

Pr[h(x₁) = y₁ ∧ h(x₂) = y₂] = 1/2^{2m}

for every x₁ ≠ x₂ ∈ {0,1}^n and y₁, y₂ ∈ {0,1}^m.
Of particular interest is the random XOR family of hash functions, defined as H_xor(n, m) = {h | h(x) = A·x ⊕ b, A ∈ {0,1}^{m×n}, b ∈ {0,1}^m}. Selecting the entries of A and b uniformly at random from {0,1} is equivalent to drawing h uniformly at random from this family. A pair A and b thus defines a hash function h as follows: h(x) = A·x ⊕ b. This family was shown to be 2-universal in [2]. For a hash function h ∈ H_xor(n, m) and a cell y, h(x) = y is a system of linear equations modulo 2: A·x ⊕ b = y. From another perspective, it can be viewed as a Boolean formula that is a conjunction of XOR constraints. The solutions to this formula are exactly the elements of the set h⁻¹(y).
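Sampling and applying a random XOR hash can be sketched in a few lines of Python; this is a didactic sketch in which matrices are plain lists of 0/1 integers:

```python
import random


def sample_xor_hash(m, n, rng):
    """Sample h(x) = A.x XOR b by drawing every entry of A and b uniformly."""
    A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
    b = [rng.randint(0, 1) for _ in range(m)]
    return A, b


def apply_hash(A, b, x):
    """Each output bit is the XOR of the selected input bits and the offset bit."""
    return [(sum(a & xi for a, xi in zip(row, x)) + bi) % 2
            for row, bi in zip(A, b)]
```

Finding the cell contents h⁻¹(y) then amounts to solving the linear system A·x ⊕ b = y over GF(2), as discussed next.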
Gaussian Elimination
Solving a system of linear equations over n variables and m constraints can be done by the row-reduction technique known variously as Gaussian Elimination or Gauss-Jordan Elimination. A matrix is in Row-Echelon form if rows with at least one nonzero element are above any rows of all zeros. The matrix is in Reduced Row-Echelon form if, in addition, every leading nonzero element in a row is 1 and is the only nonzero entry in its column. We refer to the technique for obtaining the Reduced Row-Echelon form of a matrix as Gaussian Elimination, and refer the reader to any standard text on linear algebra (cf. [26]) for details. For a matrix in Reduced Row-Echelon form, the row-rank is simply the number of nonzero rows.
For a system of linear equations A·x = b, if the row-rank of the augmented matrix [A | b] is the same as the row-rank of A, then the system is consistent and the number of solutions is 2^{n−r}, where n is the number of variables in the system of equations and r is the row-rank. Moreover, if [A | b] is in Reduced Row-Echelon form, then the values of the variables corresponding to the leading 1s in each row are completely determined by the values assigned to the remaining variables. The variables corresponding to the leading 1s are called dependent variables and the remaining variables are free. Let D and F denote the sets of dependent and free variables, respectively, and let f = |F|. Clearly f = n − r. For each possible assignment to the free variables, we get an assignment to the dependent variables by propagating the values through the augmented matrix. Thus we can enumerate all satisfying assignments of a system of linear equations if it is in Reduced Row-Echelon form.
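The reduction and enumeration just described can be sketched over GF(2) as follows. This is a simplified illustration, not the paper's implementation:

```python
def rref_gf2(M):
    """In-place reduced row-echelon form of an augmented 0/1 matrix [A | b]
    over GF(2); returns the list of pivot (dependent-variable) columns."""
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols - 1):                # last column is the RHS b
        pr = next((i for i in range(r, rows) if M[i][c]), None)
        if pr is None:
            continue                         # no pivot here: column c is free
        M[r], M[pr] = M[pr], M[r]
        for i in range(rows):                # clear column c in all other rows
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots


def solutions_gf2(M, pivots):
    """Yield every solution of a system already in RREF by enumerating
    assignments to the free variables (a bounded variant would stop after
    thresh solutions, in the spirit of BoundedSAT)."""
    n = len(M[0]) - 1
    if any(not any(row[:-1]) and row[-1] for row in M):
        return                               # a 0 = 1 row: inconsistent system
    free = [c for c in range(n) if c not in pivots]
    for k in range(2 ** len(free)):
        x = [0] * n
        for j, c in enumerate(free):
            x[c] = (k >> j) & 1
        for r, c in enumerate(pivots):       # back-substitute dependent vars
            x[c] = M[r][-1] ^ (sum(M[r][j] & x[j] for j in free) & 1)
        yield x
```

The count of enumerated solutions matches the 2^{n−r} formula above.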
Gray Codes
A Gray code [14] is an ordering of the 2^k binary strings of length k, for some k, with the property that every pair of consecutive strings in the sequence differs in exactly one bit. Thus, starting from 0⋯0, we can iteratively construct the entire Gray code sequence by flipping one bit in each step. We assume access to a procedure that, in each call, returns the position of the next bit to be flipped. Such a procedure can be implemented in constant time by a trivial modification of Algorithm L in [19].
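A simple version of the next-flip-position procedure can be sketched as below. It uses the standard fact that the bit flipped between the (k−1)-th and k-th reflected Gray code words is the lowest set bit of k, rather than Knuth's loopless bookkeeping:

```python
def flip_position(k):
    """Position of the bit flipped between the (k-1)-th and k-th Gray-code
    words: the index of the lowest set bit of k (for k >= 1)."""
    return (k & -k).bit_length() - 1


def gray_sequence(bits):
    """Walk the full reflected Gray code on `bits` bits, starting from 0...0."""
    word = [0] * bits
    seq = [tuple(word)]
    for k in range(1, 2 ** bits):
        word[flip_position(k)] ^= 1          # exactly one bit changes per step
        seq.append(tuple(word))
    return seq
```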
3 Related Work
Propositional model counting has been of theoretical as well as practical interest over the years [16, 17, 23, 27]. Early investigations showed that both #CNF and #DNF are #P-complete [29]. Consequently, approximation algorithms have been explored for both problems. A major breakthrough for approximate #DNF was achieved by the seminal work of Karp and Luby [17], which provided a Monte Carlo-based FPRAS for #DNF. The proposed FPRAS was improved in follow-up work by Karp, Luby and Madras [18] and Dagum et al. [5], achieving the best known complexity of O(mn log(1/δ)/ε²). In this work, we bring certain ideas of Karp et al. into the hashing framework, with significant adaptations.
For #CNF, early work on approximate counting resulted in hashing-based schemes that required polynomially many calls to an NP-oracle [25, 28]. No practical algorithms materialized from these schemes due to the impracticality of the underlying NP queries. Subsequent attempts to circumvent this hardness led to the development of several hashing- and sampling-based approaches that achieved scalability but provided very weak or no guarantees [13, 11]. Due to recent breakthroughs in the design of hashing-based techniques, several tools have been developed that can handle formulas involving hundreds of thousands of variables while providing rigorous formal guarantees. Overall, these tools can be broadly classified by their underlying hashing-based technique: they either (i) obtain a constant-factor approximation and then use identical copies of the input formula to obtain (1+ε)-approximations [10], or (ii) directly obtain (1+ε)-approximations [3, 4]. The first technique, when applied to DNF formulas, is not an FPRAS. In contrast, Chakraborty, Meel and Vardi [4] recently showed that tools based on the latter approach, such as ApproxMC2, do provide an FPRAS for #DNF counting. Chakraborty et al. did not analyze the complexity of the algorithm in their work. We now provide a precise complexity analysis of ApproxMC2 for #DNF. To that end, we first describe the framework on which ApproxMC2 is built.
3.1 ApproxMC Framework
Chakraborty et al. introduced in [3] a hashing-based framework called ApproxMC that requires a linear (in n) number of SAT calls. Subsequently, in ApproxMC2, the number of SAT calls was reduced from linear to logarithmic (in n). The core idea of ApproxMC2 is to employ 2-universal hash functions to partition the solution space into roughly equal small cells, wherein a cell is called small if it has at most thresh solutions, where thresh is a function of ε. A SAT solver is employed to check whether a cell is small by enumerating solutions one by one until either there are no more solutions or thresh + 1 solutions have been enumerated. Following the terminology of [3], we refer to the above described procedure as BoundedSAT. To determine the number of cells, ApproxMC2 performs a binary search that requires O(log n) steps, and the estimate is returned as the count of the solutions in a randomly picked small cell, scaled by the total number of cells. To amplify the confidence to the desired level 1−δ, ApproxMC2 invokes the estimation routine O(log(1/δ)) times and reports the median of all such estimates. Hence, the total number of BoundedSAT invocations is O(log n · log(1/δ)).
FPRAS for #DNF
The key insight of Chakraborty et al. [4] is that the BoundedSAT procedure can be implemented in polynomial time when the input formula to ApproxMC2 is in DNF. In particular, the input to every invocation of BoundedSAT is a formula that is a conjunction of the input DNF formula and a set of XOR constraints derived from the hash function. Chakraborty et al. observed that one can iterate over all the cubes of the input formula, substitute each cube into the set of XOR constraints separately, and employ Gaussian Elimination to enumerate the solutions of the simplified XOR constraints. Note that at no step would one have to enumerate more than thresh solutions. Since Gaussian Elimination is a polynomial-time procedure, BoundedSAT can be accomplished in polynomial time as well. Chakraborty et al. did not provide a precise complexity analysis of this procedure; we now provide such an analysis. We defer all proofs to the appendix. The following lemma states the time complexity of the BoundedSAT routine.
Lemma 3.1. The complexity of BoundedSAT when the input formula to ApproxMC2 is in DNF is O(mn³ + mn·thresh). ∎

We can now complete the complexity analysis:

Lemma 3.2. The complexity of ApproxMC2 is O((mn³ + mn/ε²) · log n · log(1/δ)) when the input formula is in DNF. ∎
4 Efficient Hashingbased DNF Counter
We now present three key novel algorithmic innovations that allow us to design a hashing-based FPRAS for #DNF with complexity similar to the Monte Carlo-based state-of-the-art techniques. We first introduce a new family of 2-universal hash functions that allows us to circumvent the need for expensive Gaussian Elimination. We then discuss the concept of Symbolic Hashing, which allows us to design hash functions over a space different from the assignment space, achieving a significant reduction in the complexity of the search procedure for the number of cells. Finally, we show that BoundedSAT can be replaced by an efficient stochastic estimator. Together, these three techniques achieve a significant reduction in the complexity of hashing-based DNF counting without loss of theoretical guarantees.
4.1 RowEchelon XOR Hash Functions
The complexity analysis presented in Section 3 shows that expensive Gaussian Elimination contributes significantly to the poor time complexity of ApproxMC2. Since the need for Gaussian Elimination originates from the use of H_xor(n, m), we seek a family of 2-universal hash functions that circumvents this need. We now introduce the Row-Echelon XOR family of hash functions, defined as H_RE(n, m) = {h | h(x) = [I | D]·x ⊕ b}, where I is the m×m identity matrix, and D ∈ {0,1}^{m×(n−m)} and b ∈ {0,1}^m are a random 0/1 matrix and vector, respectively. In particular, we ensure that for every i and j we have Pr[d[i][j] = 1] = 1/2 and also Pr[b[i] = 1] = 1/2. Note that D and b completely define a hash function from H_RE(n, m). The following theorem establishes the desired property of 2-universality for H_RE. The proof is deferred to the appendix.
Theorem 4.1. H_RE(n, m) is 2-universal. ∎
The naive way of enumerating satisfying assignments for given D, b, and cell y is to iterate over all assignments to the free variables in sequence, starting from 0⋯0 to 1⋯1, where the number of free variables is f = n − m. For each assignment x_F to the free variables, the corresponding assignment x_D to the dependent variables can be calculated as x_D = D·x_F ⊕ b ⊕ y, which requires O(m(n−m)) time. Can we do better?

We answer the above question positively by iterating over the assignments to the free variables out of sequence. In particular, we iterate using the Gray code sequence for f bits. The procedure is outlined in Algorithm 1. The algorithm takes the hash matrix D, an assignment x_F to the free variables, and an assignment x_D to the dependent variables as inputs, and outputs the next free-variable assignment in the Gray sequence together with the corresponding assignment to the dependent variables. When the bit at position p is flipped between consecutive free-variable assignments, x_D changes only by the p-th column of D. Thus Algorithm 1 constructs a satisfying assignment to a Row-Echelon XOR hash function in each invocation in O(m) time.
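The Gray-code enumeration can be sketched as follows. This is our own illustrative rendering, with b ⊕ y folded into a single `offset` vector, so the constraint reads x_D = D·x_F ⊕ offset; each step costs O(m) because only one column of D is applied:

```python
def enumerate_cell(D, offset):
    """Enumerate all solutions of [I | D].x = b XOR y, i.e. x_D = D.x_F XOR offset,
    where offset = b XOR y.  Yields (x_D, x_F) pairs, one per Gray-code step."""
    m, f = len(D), len(D[0]) if D else 0
    x_free = [0] * f
    x_dep = list(offset)                     # all-zero free part: x_D = offset
    yield (tuple(x_dep), tuple(x_free))
    for k in range(1, 2 ** f):
        p = (k & -k).bit_length() - 1        # Gray code: flip lowest set bit of k
        x_free[p] ^= 1
        for i in range(m):                   # x_D changes only by column p of D
            x_dep[i] ^= D[i][p]
        yield (tuple(x_dep), tuple(x_free))
```

A bounded enumeration in the style of BoundedSAT would simply stop the loop after thresh yields.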
4.2 Symbolic Hashing
For DNF formulas, Sol(φ) can be exponentially sparse compared to {0,1}^n, which is undesirable. (The number of steps of the search procedure increases with sparsity.) It is possible, however, to transform {0,1}^n to another space U and the solution space to S ⊆ U such that the ratio |U|/|S| is polynomially bounded and |S| = |Sol(φ)|. For DNF formulas, the new universe is defined as U = {(x, i) | x ⊨ φ_i}. Thus, corresponding to each x that satisfies k cubes of φ, we have k states in U. Next, the solution space is defined as S = {(x, i) ∈ U | ∀j < i, x ⊭ φ_j} for a fixed ordering of the cubes. The definition of S ensures that |S| = |Sol(φ)|. This transformation is due to Karp and Luby [17].
The key idea of Symbolic Hashing is to perform 2-universal hashing symbolically over the transformed space. In particular, the sampled hash function partitions the space U instead of {0,1}^n. Therefore, we employ hash functions from H_RE over n − w + log₂ m variables instead of n variables. Note that the bits of a satisfying assignment y to the hash function are now different from the bits of a satisfying assignment of the input formula φ. We interpret y as follows: the last log₂ m bits of y are converted to a number i such that 1 ≤ i ≤ m. The cube φ_i corresponds to a partial assignment to the variables in that cube. For simplicity, we assume that each cube has the same width w. (We can handle cubes of non-uniform width by sampling cube indices with probability proportional to 2^{−w_i} instead of uniformly.) The remaining n − w bits of y are interpreted as the assignment to the variables not in φ_i, giving a complete assignment x. Thus we get a pair (x, i) from y such that x ⊨ φ_i. For a fixed ordering of variables and cubes, we see that there is a bijection between U and the satisfying assignments of the hash function, and hence the 2-universality guarantee holds over the partitioned space U.
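One possible concrete decoding is sketched below. The exact bit layout is an illustrative assumption on our part; the description above only fixes that log₂ m bits select the cube and the remaining bits assign the variables outside it:

```python
import math


def decode_cell_element(y, cubes, n):
    """Interpret a bit-vector y over the transformed universe as a pair
    (cube index i, full assignment x) with x agreeing with cube i.

    Hypothetical layout: the last ceil(log2 m) bits pick the cube; the
    remaining bits assign the variables outside that cube, in variable order.
    Returns None if the decoded cube index is out of range.
    """
    m = len(cubes)
    idx_bits = max(1, math.ceil(math.log2(m)))
    i = int("".join(map(str, y[-idx_bits:])), 2)
    if i >= m:
        return None
    x = [None] * n
    for lit in cubes[i]:                 # the cube fixes its own variables
        x[abs(lit) - 1] = 1 if lit > 0 else 0
    rest = iter(y[:-idx_bits])
    for v in range(n):                   # remaining bits fill the other variables
        if x[v] is None:
            x[v] = next(rest)
    return i, x
```

By construction the returned x satisfies cube i, i.e. (x, i) ∈ U.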
4.3 Stochastic CellCounting
To estimate the number of solutions in a cell, we need to check, for every tuple (x, i) generated using the symbolic hash function as described above, whether (x, i) ∈ S. Such a check would require iterating over the cubes φ_1, …, φ_{i−1}, returning 'no' if x ⊨ φ_j for some j < i, and 'yes' otherwise. This results in a cell-counting procedure that requires O(m) cube checks per tuple in the worst case.
Our key observation is that a precise count of the number of solutions in a cell is not required; therefore, one can employ a stochastic estimator for the number of solutions in a cell. We proceed as follows: we define the coverage of an assignment x as cov(x) = |{i | x ⊨ φ_i}|. Note that cov(x) ≥ 1 for every x ∈ Sol(φ).
We define a random variable T_x as the number of steps taken to sample, uniformly and independently from {1, …, m}, a number i such that x ⊨ φ_i. For a randomly chosen i, the probability that x ⊨ φ_i is cov(x)/m, so each trial follows the Bernoulli distribution. The random variable T_x is the number of Bernoulli trials until the first success, which follows the geometric distribution. Therefore, E[T_x] = m/cov(x), and T_x/m is an unbiased estimator of 1/cov(x). This estimator has been previously employed by Karp et al. [18]. Here, we show that it can also be used for Stochastic Cell-Counting: we define the estimator for the number of solutions in a cell as the sum of T_x/m over the tuples (x, i) in the cell.
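The geometric trial can be sketched directly; this is an illustrative rendering using the cube encoding from the sketch in Section 2:

```python
import random


def coverage_trials(x, cubes, rng):
    """Sample cube indices uniformly until x satisfies the sampled cube.

    The return value T follows a geometric distribution with success
    probability cov(x)/m, so E[T] = m/cov(x) and T/m estimates 1/cov(x)."""
    def sat(cube):
        return all(x[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in cube)
    t = 0
    while True:
        t += 1
        if sat(rng.choice(cubes)):
            return t
```

Averaging T_x/m over the tuples of a cell then yields the stochastic cell count described above.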
4.4 The Full Algorithm
We now incorporate the above techniques into ApproxMC2 and call the revised algorithm DNFApproxMC, presented as Algorithm 3. First, note that the expression for thresh is twice the corresponding value in ApproxMC2. Then, in line 4, a matrix A and vectors b and y are obtained, which are employed to construct an appropriate hash function and cell during the search procedure of DNFApproxMCCore. DNFApproxMC makes O(log(1/δ)) calls to DNFApproxMCCore (lines 4–8) and returns the median of all the estimates (lines 9–10) to boost the probability of success to 1−δ.
We now discuss the subroutine DNFApproxMCCore, which is an adaptation of the core routine of ApproxMC2 but with significant differences. First, for DNF formulas with cube width w, the number of solutions is lower-bounded by 2^{n−w}. Therefore, instead of starting with one hash constraint, we can safely start with a number of constraints determined by this lower bound (lines 3–4). Thereafter, DNFApproxMCCore invokes the search procedure in line 5 to find the right number of constraints. The cell count with that number of constraints is calculated in line 6, and the estimate is returned in line 7.
The construction algorithm builds the base matrix A and base vectors b and y required for sampling from the H_RE family. One block of A is a random 0/1 matrix, and the other is a random upper-triangular 0/1 matrix with all diagonal elements 1. In line 3, A is constructed as the vertical concatenation of the two blocks.
Algorithm 5 performs a binary search to find the number of constraints at which the cell count falls below thresh. For a DNF formula with cube width w, since the number of solutions is bounded between 2^{n−w} and m·2^{n−w}, we need to search for the right number of constraints within a window of size log₂ m. Therefore, the binary search takes at most O(log log m) steps to find the correct number of constraints.
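The search over this window can be sketched as a standard binary search; here `cell_count` is a hypothetical callback standing in for hashing and counting a random cell with q constraints:

```python
def search_constraints(cell_count, lo, hi, thresh):
    """Smallest q in [lo, hi] with cell_count(q) < thresh, assuming the cell
    count shrinks monotonically as constraints are added."""
    while lo < hi:
        mid = (lo + hi) // 2
        if cell_count(mid) < thresh:
            hi = mid                        # mid already small enough: go left
        else:
            lo = mid + 1                    # cell still too big: add constraints
    return lo
```

With a window of width log₂ m, the loop runs O(log log m) times.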
Symbolic Hashing is implemented in Algorithm 6. In line 2, we obtain a hash function from H_RE over n − w + log₂ m variables by invoking Algorithm 8. We assume access to a procedure in line 10 that returns the position of the bit flipped between two consecutive Gray-code assignments. A satisfying assignment to the hash function is constructed in line 6. It is interpreted to generate a pair (x, i) in line 7, which is checked for satisfiability in line 8. The final cell count is returned in line 12.
Algorithm 7 implements the stochastic cell-counting procedure. The key idea is to sample cubes uniformly at random from φ until a cube φ_i is found such that x ⊨ φ_i (lines 2–5). The number of cubes sampled, divided by the total number of cubes, is the estimate returned (line 6).
The search procedure in DNFApproxMCCore is based upon LogSATSearch in [4]. As noted in the analysis of [4], such a logarithmic search procedure requires that the solution space for a hash function with q + 1 hash constraints be a subset of the solution space with q hash constraints. Furthermore, we want to preserve the Row-Echelon nature of the resulting hash constraints. To this end, we first construct A and b as follows:
To seed the construction procedure, Algorithm 4 first randomly samples a 0/1 vector b of size n − w + log₂ m, which is the maximum number of hash constraints possible. We then construct a 0/1 matrix A as the vertical concatenation of two blocks, where the first block is a random 0/1 matrix and the second block is an upper-triangular matrix whose entries a[i][j] are defined as follows:

a[i][j] = 1 if j = i,

a[i][j] is chosen uniformly at random from {0,1} if j > i,

a[i][j] = 0 if j < i.

The reason for this definition is that for DNF counting we have a good lower bound on the number of hash constraints we can start with, and the number of rows in the first block corresponds to this lower bound. The definition of the triangular block ensures that the rows of A are linearly independent, which results in a monotonically shrinking solution space.
Algorithm 8 takes A, b, y and a number q as input and returns A_q, b_q and a cell y_q such that (A_q, b_q) represents a hash function from H_RE with q constraints and y_q represents a cell. A precondition is that q not exceed n − w + log₂ m. In lines 1 and 2, the first q rows of A and the first q elements of b and y are selected as A_q, b_q and y_q, respectively; the first rows of A_q are drawn from the lower-bound block of A and the remaining rows are the first rows of the triangular block. Each subsequent row is used to reduce the preceding rows in lines 5 to 8, so that the only nonzero elements of the first q columns are the leading 1s. Thus Algorithm 8 ensures that, for a given A, b and y, the solution space of the hash function with q constraints is a superset of the solution space with q′ constraints, for all q′ ≥ q.
5 Analysis
In order to prove the correctness of DNFApproxMC, we first state the following helper lemma. We defer the proofs to the appendix.
For every 1 ≤ q ≤ n − w + log₂ m, let h_q denote the hash function defined by (A_q, b_q). For every cell y and every pair of distinct elements s₁, s₂ of the transformed universe U, we have Pr[h_q(s₁) = y ∧ h_q(s₂) = y] ≤ 2/2^{2q}. ∎

The difference between this lemma and Lemma 1 in [4] is that the probability bounds differ by a factor of 2. We account for this difference by making thresh in DNFApproxMC twice the corresponding value in ApproxMC2. Therefore, the rest of the proof of Theorem 5 (below) is exactly the same as the proof of Theorem 4 of [4]. For completeness, we first restate Lemmas 2 and 3 from [4] below.
In the following, denotes the event , and and denote the events and respectively. denotes the integer
The following bounds hold:
∎
Let Error denote the event that DNFApproxMCCore returns a pair (c, q) such that c·2^q does not lie in the interval [|S|/(1+ε), |S|·(1+ε)].
∎
Let return count . Then . ∎
Theorem 5 follows from the preceding lemmas by noting that DNFApproxMC boosts the probability of correctness of the count returned by DNFApproxMCCore to 1−δ by using the median of O(log(1/δ)) calls.
DNFApproxMC runs in Õ(mn log(1/δ)/ε²) time. (We say f = Õ(g) if f = O(g · polylog(g)).) ∎
6 Conclusion
Hashing-based techniques have emerged as a promising approach to obtaining counting algorithms and tools that scale to large instances while providing strong theoretical guarantees. This has led to an interest in designing hashing-based algorithms for counting problems that are known to be amenable to fully polynomial randomized approximation schemes. The prior hashing-based approach [4] provided an FPRAS for DNF, but with complexity much worse than state-of-the-art techniques. In this work, we introduced (i) Symbolic Hashing, (ii) Stochastic Cell-Counting, and (iii) a new 2-universal family of Row-Echelon XOR hash functions, and obtained a hashing-based FPRAS for #DNF with complexity similar to the state of the art.
Given the recent interest in hashing-based techniques and the generality of our contributions, we believe the concepts introduced in this paper can lead to the design of hashing-based techniques for other classes of constraints. For example, all prior versions of ApproxMC relied on deterministic SAT solvers for exactly counting the solutions in a cell for #CNF. The technique of Stochastic Cell-Counting opens the door to the use of probabilistic SAT solvers for #CNF. Furthermore, a salient feature of the H_RE family is the sparsity of its hash functions; in fact, the sparsity increases with the addition of constraints. Sparse hash functions have been shown to be desirable for efficiently solving CNF+XOR constraints [15, 9, 12]. An interesting direction for future work is to test the H_RE family with CNF formulas.
Acknowledgements
The authors thank Jeffrey Dudek, Supratik Chakraborty and Dror Fried for valuable discussions. Work supported in part by NSF grants CCF-1319459 and IIS-1527668, by the NSF Expeditions in Computing project "ExCAPE: Expeditions in Computer Augmented Program Engineering", and by an IBM Graduate Fellowship. Kuldeep S. Meel is supported by the IBM PhD Fellowship and the Lodieska Stockbridge Vaughn Fellowship.
References
 [1] F. Bacchus, S. Dalmao, and T. Pitassi. Algorithms and complexity results for #SAT and Bayesian inference. In Proc. of FOCS, pages 340–351, 2003. URL: http://dl.acm.org/citation.cfm?id=946243.946291.
 [2] J Lawrence Carter and Mark N Wegman. Universal classes of hash functions. In Proceedings of the ninth annual ACM symposium on Theory of computing, pages 106–112. ACM, 1977.
 [3] S. Chakraborty, K. S. Meel, and M. Y. Vardi. A scalable approximate model counter. In Proc. of CP, pages 200–216, 2013.
 [4] S. Chakraborty, K. S. Meel, and M. Y. Vardi. Algorithmic improvements in approximate counting for probabilistic inference: From linear to logarithmic SAT calls. In Proc. of IJCAI, 2016.
 [5] Paul Dagum, Richard Karp, Michael Luby, and Sheldon Ross. An optimal algorithm for monte carlo estimation. SIAM Journal on computing, 29(5):1484–1496, 2000.
 [6] Nilesh Dalvi and Dan Suciu. Efficient query evaluation on probabilistic databases. The VLDB Journal—The International Journal on Very Large Data Bases, 16(4):523–544, 2007.
 [7] C. Domshlak and J. Hoffmann. Probabilistic planning via heuristic forward search and weighted model counting. Journal of Artificial Intelligence Research, 30(1):565–620, 2007.
 [8] Leonardo DuenasOsorio, Kuldeep S Meel, Roger Paredes, and Moshe Y Vardi. Countingbased reliability estimation for powertransmission grids. In AAAI, pages 4488–4494, 2017.
 [9] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Lowdensity parity constraints for hashingbased discrete integration. In Proc. of ICML, pages 271–279, 2014.
 [10] Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In Proc. of ICML, pages 334–342, 2013.
 [11] V. Gogate and R. Dechter. Approximate counting by sampling the backtrackfree search space. In Proc. of the AAAI, volume 22, page 198, 2007.
 [12] C. P. Gomes, J. Hoffmann, A. Sabharwal, and B. Selman. Short XORs for Model Counting; From Theory to Practice. In SAT, pages 100–106, 2007.
 [13] C. P. Gomes, A. Sabharwal, and B. Selman. Model counting: A new strategy for obtaining good bounds. In Proc. of AAAI, volume 21, pages 54–61, 2006.
 [14] Frank Gray. Pulse code communication, March 17 1953. US Patent 2,632,058.
 [15] Alexander Ivrii, Sharad Malik, Kuldeep S. Meel, and Moshe Y. Vardi. On computing minimal independent support and its applications to sampling and counting. Constraints, pages 1–18, 2015. doi:10.1007/s10601-015-9204-z.
 [16] M.R. Jerrum, L.G. Valiant, and V.V. Vazirani. Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43(2-3):169–188, 1986. URL: http://portal.acm.org/citation.cfm?id=11534.11537.
 [17] R.M. Karp and M. Luby. Monte-Carlo algorithms for enumeration and reliability problems. In Proc. of FOCS, 1983.
 [18] R.M. Karp, M. Luby, and N. Madras. Monte-Carlo approximation algorithms for enumeration problems. Journal of Algorithms, 10(3):429–448, 1989.
 [19] Donald E Knuth. Generating all ntuples. The Art of Computer Programming, 4, 2004.
 [20] George Markowsky, J Lawrence Carter, and Mark Wegman. Analysis of a universal class of hash functions. Mathematical Foundations of Computer Science 1978, pages 345–354, 1978.
 [21] James D Park and Adnan Darwiche. Complexity results and approximation strategies for map explanations. Journal of Artificial Intelligence Research, pages 101–133, 2006.
 [22] D. Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1):273–302, 1996. doi:10.1016/0004-3702(94)00092-1.
 [23] T. Sang, F. Bacchus, P. Beame, H. Kautz, and T. Pitassi. Combining component caching and clause learning for effective model counting. In Proc. of SAT, 2004.
 [24] T. Sang, P. Beame, and H. Kautz. Performing bayesian inference by weighted model counting. In Prof. of AAAI, pages 475–481, 2005.
 [25] L. Stockmeyer. The complexity of approximate counting. In Proc. of STOC, pages 118–126, 1983.
 [26] Gilbert Strang. Introduction to linear algebra, volume 3. WellesleyCambridge Press Wellesley, MA, 1993.
 [27] S. Toda. On the computational power of PP and ⊕P. In Proc. of FOCS, pages 514–519. IEEE, 1989.
 [28] L. Trevisan. Lecture notes on computational complexity. Notes written in Fall, 2002. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.9877&rep=rep1&type=pdf.
 [29] L.G. Valiant. The complexity of enumeration and reliability problems. SIAM Journal on Computing, 8(3):410–421, 1979.
Appendix
Proof of Lemma 3.1
Proof.
When the input formula to ApproxMC2 is in DNF, BoundedSAT is invoked with a formula of the form φ ∧ ψ, where φ is the input DNF formula and ψ is a conjunction of XOR constraints. For each cube φ_i, BoundedSAT proceeds by performing Gaussian Elimination on the XOR constraints simplified by φ_i. Since the number of XOR constraints can be O(n), Gaussian Elimination can take O(n³) time, resulting in a cumulative complexity of O(mn³) over all cubes.

At most thresh solutions of each simplified system may have to be enumerated, and each enumeration requires O(n) time. Therefore, the complexity of enumeration is O(mn·thresh). Thus BoundedSAT runs in O(mn³ + mn·thresh) time in the worst case. ∎
Proof of Lemma 3.2
Proof.
Proof of Theorem 4.1:
Proof.
Let h be a hash function chosen uniformly at random from H_RE(n, m), with D as its matrix. Let x₁ and x₂ be any two assignments such that x₁ ≠ x₂. To prove 2-universality of H_RE, we need to show that for all y₁, y₂ ∈ {0,1}^m: