Combinatorial decomposition approaches for efficient counting and random generation FPTASes
Given a combinatorial decomposition for a counting problem, we resort to the simple scheme of approximating large numbers by floating-point representations in order to obtain efficient Fully Polynomial Time Approximation Schemes (FPTASes) for it. The number of bits employed for the exponent and the mantissa will depend on the error parameter ε and on the characteristics of the problem. Accordingly, we propose the first FPTASes with relative error ε for counting and for generating uniformly at random a labeled DAG with a given number of vertices. This is accomplished starting from a classical recurrence for counting DAGs, whose values we approximate by floating-point numbers.
After extending these results to other families of DAGs, we show that the same approach also works for problems where we are given a compact representation of a combinatorial ensemble and are asked to count and sample elements from it. Here we employ the floating-point approximation method to transform the classic pseudo-polynomial algorithm for counting 0/1 Knapsack solutions into a very simple FPTAS with relative error ε. Its complexity improves upon the recent result of Štefankovič et al. (SIAM J. Comput., 2012) and, in part of the parameter range, also upon the best-known randomized algorithm (Dyer, STOC, 2003). To show the versatility of this technique, we also apply it to a recent generalization of the problem of counting 0/1 Knapsack solutions in an arc-weighted DAG, obtaining a faster and simpler FPTAS than the existing one.
In this paper we consider two main types of counting problems. In the first (combinatorial family), the input consists of a single integer n and we are interested in counting/generating the objects of the n-th slice of a family parametrized by n, such as all labeled trees on n vertices or all well-formed formulas on n pairs of parentheses; in this paper we tackle labeled directed acyclic graphs (DAGs) on n vertices, and two DAG subclasses. In the second (combinatorial ensemble), we are given a structure and we want to count and sample all of its substructures with a given property, such as the spanning trees or the perfect matchings of a graph given in input; here we tackle the problem of counting 0/1 Knapsack solutions, and a generalization of this problem to a DAG.
A general result by Jerrum, Valiant and Vazirani [11] is that the problem of exact uniform random generation of 'efficiently verifiable' combinatorial structures is reducible to the counting problem. Since in many cases the counting problem is either hard, or simply expensive in practice, [11] also shows that for self-reducible problems almost uniform random generation and randomized approximate counting are inter-reducible. As a consequence, randomized approximate approaches, in particular those based on Markov chains, have attracted the most attention. Indeed, besides being general, in the sense that they do not rely on problem-specific combinatorial decompositions, these approaches allow for faster algorithms when approximate solutions are good enough.
The intended message of the present work is that the compromise towards approximate solutions can also take place in the context of methods based on combinatorial decompositions. In order to facilitate this, we build up a minimalistic layer of floating-point arithmetic suitably tailored for this purpose. Our idea dates back to Denise and Zimmermann [4], who considered floating-point arithmetic for uniform random generation of decomposable structures. A 'decomposable structure' is a combinatorial structure definable in terms of the 'standard constructions' of disjoint union, Cartesian product, Sequence, Cycle, and Set (see, e.g., [7] for details). These include, for example, unordered or ordered trees, permutations, and set partitions, but not more complex objects such as DAGs, nor substructures of a structure given in input, such as the solutions of a given 0/1 Knapsack instance. Moreover, even though [4] relates the relative error to the length of the mantissa, their results are not stated in terms of FPTASes. An FPTAS is a deterministic algorithm that estimates the exact solution within relative error ε, in time polynomial in the input size and in 1/ε.
Given an error ε, we represent large integers as floating-point numbers having an exact exponent, so that no overflow can occur, and a mantissa whose length depends on ε and is only as long as needed to guarantee a relative error of ε.
We show that floating-point arithmetic can be added as a technical layer on top of any suitable combinatorial decomposition of the problem at hand, obtaining both efficient, state-of-the-art deterministic FPTASes for counting, and practical random generation algorithms with explicit error probability bounds. Some of our FPTASes actually run in time linear in the length of the output.
Until now, for all the problems considered in this paper, Monte Carlo algorithms had the best running times, with a performance guarantee either proven (as for counting 0/1 Knapsack solutions [16]) or just generally accepted (as for DAG generation [12, 13]). Recently, other authors have proposed deterministic algorithms for approximate counting [26, 10]. However, our deterministic algorithms are the first ones to reach, and even improve upon, the running times of the Monte Carlo algorithms. Considering also that we get rid of the error probability altogether, it is quite remarkable that we close the gap between deterministic and randomized algorithms.
In the same way that Markov chains offer a fascinating layer of reusable theory, our approach is also unifying, with the required mathematics for bounding the running time in terms of ε embodied in the technical floating-point arithmetic layer. Even though its level of generality is not comparable, it still offers a conceptual tool that can guide and inspire the design of new algorithms. In this new scenario, the length of the mantissa becomes a resource, and minimizing its consumption leads one to reduce the number of subsequent approximation phases in the processing of the data flow. This view indeed helped us gain an extra factor in Thm. 5. Moreover, the algorithms inspired by this framework do not require the difficult ad-hoc analysis of the rapid mixing properties of Markov chains, which is necessary for a conclusive word on the actual computational complexity of a given problem.
Based on these facts, we hope to see renewed interest in methods grounded in the combinatorial decomposition of the problem at hand, both in practical and in theoretical studies on counting and random generation, where the problems allow.
1.1 Counting and random generation for a combinatorial family
To illustrate the floating-point approximation scheme for a combinatorial family, we focus on DAGs. They constitute a basic class of graphs, with applications in various fields. As in the case of other combinatorial objects, the problem of generating uniformly at random (u.a.r., for short) a DAG with labeled vertices was first tackled with a Markov chain algorithm [12, 13]. The main issue behind such a randomized approach lies in the difficulty of proving the rapid mixing property; this was the case here for DAGs, as such a proof never appeared. Steinsky [23] proposed a nice generalization of Prüfer's encoding of labeled trees to labeled DAGs, and put forth ranking and unranking algorithms. These led to a deterministic exact random generation algorithm, whose time and space bounds involve the slowdown factor of multiplying two large numbers.
Our solution is based on the decomposition of DAGs by sources, initially proposed by Robinson [21] to obtain a recurrence counting the labeled DAGs with n vertices and a given number of sources. We exploit this decomposition by generating a labeled DAG recursively, at each step generating its sources (and their out-going arcs) by using the values of the counting recurrence as a probability distribution. To further illustrate this method, in Appendix A we consider two recently studied subclasses of DAGs: essential DAGs (essDAGs), and extensional DAGs (extDAGs).
A labeled DAG, an essential DAG, or an extensional DAG on n vertices can be generated u.a.r. in polynomial time, provided a precomputed table of polynomial size, computable in polynomial time, is available.
We then show that, instead of storing the values of the counting recurrence as exact numbers on O(n²) bits, we can store approximate floating-point numbers with O(log n) bits for the exponent and b bits for the mantissa. This leads to the first deterministic FPTASes for counting and random generation, as stated in the following theorems, concerning the numbers of labeled DAGs, essential DAGs, and extensional DAGs, respectively, on n vertices.
For any ε > 0 and for every n, we can compute a floating-point number that approximates the number of labeled DAGs, essential DAGs, or extensional DAGs on n vertices within relative error ε, in time polynomial in n and 1/ε.
For any ε > 0 and for every n, we can generate at random a labeled DAG, essential DAG, or extensional DAG on n vertices, with a probability that deviates from the uniform one by a relative error of at most ε. This can be done in polynomial time, provided a precomputed table, computable in polynomial time, is available.
1.2 Counting a combinatorial ensemble
To illustrate the floating-point approximation scheme for a combinatorial ensemble, we choose the well-known problem of counting 0/1 Knapsack solutions. We are given nonnegative integer weights w_1, …, w_n and an integer capacity C, and are asked how many subsets of the weights sum up to at most C. Since this problem is #P-complete, research has focused on approximation algorithms. The first one was a randomized subexponential-time algorithm [6] based on near-uniform sampling of feasible solutions by a random walk. A rapidly mixing Markov chain appeared in [16], which provided the first Fully Polynomial Time Randomized Approximation Scheme (FPRAS) for this problem, which had remained open for some time. A faster FPRAS was given in [5], by combining dynamic programming and rejection sampling; this bound can be further improved by a more sophisticated approach using randomized rounding. Recently, [26, 10] gave the first deterministic FPTAS for this problem. A weaker result, namely a version of the algorithm of [26] whose number of arithmetic operations depends on additional parameters of the input, appeared in [9] and in the combined extended abstract [10].
The solution in [26] is based on a function mapping i and a to the smallest capacity c such that there exist at least a solutions to the 0/1 Knapsack problem with weights w_1, …, w_i and capacity c. The second parameter a of this function is then approximated, and the function is computed by a dynamic programming algorithm.
We start from the classic pseudo-polynomial dynamic programming algorithm obtained from the recurrence
s(i, c) = s(i−1, c) + s(i−1, c − w_i),
where s(i, c) is the number of 0/1 Knapsack solutions that use a subset of the items {1, …, i} whose weights sum up to at most c. We approximate the values s(i, c) by floating-point numbers, which leads to a more direct FPTAS, with a much simpler proof, and an easily implementable algorithm. Making the same assumption as [26] that additions on the numbers involved take unit time, we improve upon [26, 10] as follows:
For every ε > 0, and for every input w_1, …, w_n, C to the 0/1 Knapsack counting problem, we can compute a floating-point number approximating the exact number of solutions within relative error ε, in time polynomial in n and 1/ε, assuming unit-cost additions and comparisons on the numbers involved.
Note that in part of the parameter range our deterministic FPTAS also improves upon both FPRASes in [5].
Our reasoning is along the following lines. Since the number of solutions can be at most 2^n, and the values of the dynamic programming table are obtained by sequences of successive additions, we can approximate them using floating-point numbers with O(log n) bits for the exponent and O(log(n/ε)) bits for the mantissa. In order to obtain the approximation factor 1 + ε, we will show that the relative error of each approximation of s(i, c) grows by a factor of at most 1 + 2^(1−b) per addition. To keep the table small, we exploit the fact that the number of different entries in each row of the approximated table is at most the number of representable floating-point values, which is polynomial in n and 1/ε.
Recently, the problem of counting 0/1 Knapsack solutions has been extended to a DAG, as follows [14]. Given a DAG with nonnegative arc weights, two vertices s and t, and a capacity C, count how many paths from s to t have total weight at most C; this problem is relevant for various applications in biological sequence analysis, see the references in [14]. This is clearly a generalization of counting 0/1 Knapsack solutions: given an instance w_1, …, w_n, C, it suffices to construct the DAG having {v_0, v_1, …, v_n} as vertex set, s = v_0, t = v_n, and, for each i ∈ {1, …, n}, two parallel arcs from v_{i−1} to v_i, with weights 0 and w_i, respectively.
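The reduction is easy to state in code. The following sketch (our own Python rendering; all names are ours) builds the two-parallel-arc DAG and checks the path/subset correspondence by brute force on a tiny instance:

```python
from itertools import product

def knapsack_to_dag(weights):
    """Vertices 0..n; for each item i, two parallel arcs from i to i+1,
    of weight 0 (skip the item) and weight weights[i] (take it).
    Paths from 0 to n of total weight <= C correspond one-to-one to
    Knapsack solutions of weight <= C."""
    n = len(weights)
    arcs = []
    for i, w in enumerate(weights):
        arcs.append((i, i + 1, 0))   # do not take item i
        arcs.append((i, i + 1, w))   # take item i
    return n, arcs

def count_paths_at_most(n, arcs, C):
    """Brute-force count of 0-to-n paths of weight <= C in the layered
    DAG above (exponential in n; for checking tiny instances only)."""
    out = {u: [] for u in range(n + 1)}
    for u, v, w in arcs:
        out[u].append(w)             # parallel arcs grouped by tail
    return sum(1 for choice in product(*(out[i] for i in range(n)))
               if sum(choice) <= C)

# Subsets of {2, 3, 4} of weight <= 5: {}, {2}, {3}, {4}, {2, 3}.
n, arcs = knapsack_to_dag([2, 3, 4])
assert count_paths_at_most(n, arcs, 5) == 5
```

Each path picks exactly one of the two parallel arcs per layer, i.e. decides each item independently, which is exactly a 0/1 Knapsack solution.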
In [14], the technique of [26] is extended to this problem, and an FPTAS is obtained (a factor is inaccurately missing from their stated complexity bound). Just as for the classical 0/1 Knapsack problem, we start from the basic pseudo-polynomial dynamic programming algorithm extended to a DAG, whose values we approximate using floating-point numbers. We show that we can organize the computation in sequences of successive additions, so that we need floating-point numbers with O(log n) bits for the exponent and a mantissa whose length depends only on the maximum path length. This analogously leads to a faster and simpler FPTAS.
For every ε > 0, and for every input DAG on n vertices and m arcs, with nonnegative arc weights and a capacity C, we can compute an approximation within relative error ε of the number of paths from s to t of weight at most C, in time polynomial in the input size and in 1/ε, assuming unit-cost additions and comparisons on the numbers involved.
2 Approximation by floating-point numbers
Throughout this paper, we assume that the problem instances consist of n objects (DAGs with n vertices, 0/1 Knapsack instances with n objects). Let N be such that the maximum numerical value of a particular counting problem is at most 2^N (that is, it can be represented with N bits). Any positive integer x ≤ 2^N can be written as
x = 2^(e_x) · 0.x_1 x_2 ⋯ x_{e_x},
where e_x ≤ N, x_1 = 1, and x_j ∈ {0, 1}, for 1 ≤ j ≤ e_x. Under floating-point arithmetic terminology, e_x is called the exponent of x, and the binary string x_1 x_2 ⋯ is called its mantissa.
We will approximate x by a floating-point number x̂ which has O(log N) bits dedicated to storing its exponent exactly, but only b bits dedicated to storing the first b bits of its mantissa; that is, we approximate x by the number
x̂ = 2^(e_x) · 0.x_1 x_2 ⋯ x_b.
We will often drop the subscript x when it is clear from the context. For sure, we will choose b ≤ N, since the contrary cannot help.
For every x, it holds that
x̂ ≤ x ≤ (1 + 2^(1−b)) x̂.  (1)
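To make the truncation concrete, here is a minimal sketch (in Python; the function name is ours) of the approximation: the exponent, i.e. the bit length, is kept exactly, and all but the leading b mantissa bits are zeroed out.

```python
def truncate(x, b):
    """Keep only the leading b bits of the positive integer x.
    The exponent (here: the bit length of x) is kept exactly,
    so no overflow can occur; only mantissa bits are discarded."""
    e = x.bit_length()          # exact exponent
    if e <= b:
        return x                # x fits in b mantissa bits: no error
    drop = e - b
    return (x >> drop) << drop  # zero out all but the b leading bits

# Truncation only rounds down, and the relative error is below 2**(1-b):
x = 123456789
for b in (4, 8, 16):
    xt = truncate(x, b)
    assert xt <= x < xt * (1 + 2 ** (1 - b))
```

The bound holds because the discarded low-order part is smaller than 2^(e−b), while the kept part is at least 2^(e−1), since the leading mantissa bit is 1.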
Let x̂ and ŷ be two floating-point numbers with O(log N) bits for the exponent and b bits for the mantissa. We denote by x̂ ⊕ ŷ the sum x̂ + ŷ truncated to b mantissa bits, and by x̂ ⊗ ŷ the analogously truncated product. We assume that we can compute x̂ ⊕ ŷ with a bit complexity linear in b plus the length of the exponents; if additions on b-bit numbers take unit time, then we assume that x̂ ⊕ ŷ can be computed with a constant number of word operations. Let us denote by μ(b) the slowdown factor of a multiplication algorithm on b-bit numbers; for example, the Schönhage–Strassen algorithm [22] multiplies two b-bit numbers in time O(b log b log log b), and we get μ(b) = log b · log log b. Accordingly, we assume that we can compute x̂ ⊗ ŷ with O(b μ(b)) bit operations, plus the cost of adding the exponents.
If x, y ≤ 2^N are integers, and x̂, ŷ are two floating-point numbers with O(log N) bits for the exponent and b bits for the mantissa such that
x̂ ≤ x ≤ (1 + 2^(1−b))^h x̂  and  ŷ ≤ y ≤ (1 + 2^(1−b))^k ŷ,
for some integers h, k ≥ 0, then by (1) the following inequalities hold:
x̂ ⊕ ŷ ≤ x + y ≤ (1 + 2^(1−b))^(max(h,k)+1) (x̂ ⊕ ŷ),  (2)
x̂ ⊗ ŷ ≤ x · y ≤ (1 + 2^(1−b))^(h+k+1) (x̂ ⊗ ŷ).  (3)
For each particular problem, we will choose b as a function of N and of the error factor ε. For the problem of random generation of DAGs we have N = O(n²); in the case of counting 0/1 Knapsack solutions, N = n; while for its extension to a DAG, N is linear in the number of vertices of the transformed DAG.
3 Random generation of DAGs
In Sec. 3.1 we present the well-known decomposition of labeled DAGs by sources [21], and turn it into a deterministic exact random generation algorithm. In Sec. 3.2, we show how to approximate the numerical values of the counting recurrence, and argue that the resulting random generation algorithm is an FPTAS of lower complexity.
3.1 Exact u.a.r. generation of DAGs by sources
For 1 ≤ k ≤ n, let a_{n,k} denote the number of labeled DAGs with n vertices, out of which precisely k are sources. Then a_n = a_{n,1} + ⋯ + a_{n,n} is the number of labeled DAGs on n vertices. In [21], a simple decomposition of DAGs by sources delivers the following counting recurrence:
a_{n,k} = C(n, k) · Σ_{s=1..n−k} (2^k − 1)^s · 2^(k(n−k−s)) · a_{n−k,s},  (4)
where a_{n,n} = 1, for all n ≥ 1. Indeed, there are C(n, k) ways to choose the k sources, and by removing the sources we obtain a DAG with n − k vertices and s sources, for some 1 ≤ s ≤ n − k. Each of these s sources must have a non-empty in-neighborhood included in the set of k removed vertices, which gives the factor (2^k − 1)^s, while the other n − k − s vertices can have arbitrary in-neighbors among these k vertices, which gives the factor 2^(k(n−k−s)).
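To make the recurrence concrete, the following sketch (our own Python rendering; names are ours) fills the table of counts by number of sources and recovers the known totals for small n:

```python
from math import comb

def dag_counts(N):
    """a[n][k] = number of labeled DAGs on n vertices with exactly k
    sources, via the decomposition by sources: removing the k sources
    leaves a DAG on n - k vertices with s sources; each of those s new
    sources needs a non-empty in-neighborhood among the k removed
    vertices (2**k - 1 choices), and every other remaining vertex an
    arbitrary in-neighborhood among them (2**k choices each)."""
    a = [[0] * (N + 1) for _ in range(N + 1)]
    a[0][0] = 1                            # the empty DAG
    for n in range(1, N + 1):
        a[n][n] = 1                        # the arcless DAG
        for k in range(1, n):
            a[n][k] = comb(n, k) * sum(
                (2 ** k - 1) ** s * 2 ** (k * (n - k - s)) * a[n - k][s]
                for s in range(1, n - k + 1))
    return [sum(row) for row in a]         # total number of DAGs per n

# Totals match the known counts of labeled DAGs (OEIS A003024):
assert dag_counts(5) == [1, 1, 3, 25, 543, 29281]
```

The exact algorithm of the paper computes this same table, only with care for the bit complexity of the huge intermediate numbers.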
In order to generate u.a.r. a DAG having the set {1, …, n} as vertex set, recurrence (4) suggests the following recursive algorithm. Choose the number k of its sources with probability a_{n,k}/a_n. Then, choose u.a.r. the k sources, and call the recursive algorithm for the remaining n − k vertices. Finally, connect the k sources with the graph returned by the recursive call, as indicated by the proof of (4); see Algorithm 1.
In order to choose a number k with probability a_{n,k}/a_n, we can choose u.a.r. an integer r ∈ {0, …, a_n − 1}, and then take k as the smallest integer such that r < a_{n,1} + ⋯ + a_{n,k}. For every n, we can store a patricia trie containing the partial sums a_{n,1} + ⋯ + a_{n,k}, for all 1 ≤ k ≤ n; k is then found by a successor query in the patricia trie.
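A minimal sketch of this sampling step (Python; names are ours, and a plain sorted array with bisect stands in for the patricia-trie successor query):

```python
import bisect
import random

def sample_source_count(row, rng):
    """Pick index k with probability row[k] / sum(row), where row[k]
    is the (possibly huge, exact) count of objects in class k.  Draw a
    uniform integer below the total and locate it among the prefix
    sums; bisect on a sorted array stands in for the patricia trie."""
    prefix = []
    total = 0
    for v in row:
        total += v
        prefix.append(total)            # prefix[k] = row[0] + ... + row[k]
    r = rng.randrange(total)            # uniform in {0, ..., total - 1}
    return bisect.bisect_right(prefix, r)  # smallest k with r < prefix[k]

rng = random.Random(42)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[sample_source_count([1, 2, 7], rng)] += 1
# counts now approximates the ratios 1 : 2 : 7
```

Since Python integers are arbitrary-precision, this works unchanged on the huge exact counts a_{n,k}; the point of the paper's data structures is to make the successor search cheap in the bit-complexity model.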
The asymptotic behavior of a_n is known [2, 3]; in particular, a_n = 2^(Θ(n²)), so we need O(n²) bits to store each a_{n,k}. In order to compute the numbers a_{n,k}, we assume to have access to pre-computed tables storing the numerical values of the binomial coefficients and of all powers (2^k − 1)^s; a power of two can be computed by simply setting one bit to 1. Each number a_{n,k} can then be computed with O(n) additions and multiplications on O(n²)-bit numbers; therefore, computing the entire table has polynomial bit complexity.
For every n, the n-th patricia trie can be constructed with a number of bit operations linear in the total size of the stored values, and supports successor queries in time proportional to the length of the queried key; these are standard considerations in data structures. Choosing k is therefore efficient. The second part of the algorithm takes time proportional to the number of generated arcs, since each of the at most C(n, 2) arcs of a DAG is introduced at most once. Therefore, we obtain Thm. 1.
3.2 An FPTAS for generating labeled DAGs u.a.r.
Let ε > 0 be fixed. Instead of using O(n²) bits for storing each entry of the table a_{n,k}, we use floating-point representations with O(log n) bits for the exponent and b bits for the mantissa.
For each n and k, we approximate a_{n,k} by â_{n,k}, recursively computed, following recurrence (4), by floating-point additions ⊕ and multiplications ⊗, with â_{n,n} = 1 for all n ≥ 1. In order to compute the numbers â_{n,k}, we assume to have access to tables now storing floating-point approximations, with O(log n) bits for the exponent and b bits for the mantissa and a precision as in (1), of the binomial coefficients and of the numbers (2^k − 1)^s. These floating-point numbers can be obtained from the tables storing their exact values, assumed available in the exact case, by simply setting the exponent to be the length of the exact number, and by filling the mantissa with its first b bits. A power of two can be represented exactly in the floating-point representation by setting the exponent appropriately, the first bit of the mantissa to 1, and the remaining bits to 0.
Each number â_{n,k} can be computed with O(n) floating-point additions and multiplications; thus, the entire table of O(n²) entries can be computed with O(n³) floating-point operations.
The following lemma characterizes the approximation quality of the numbers â_{n,k}.
For any n and any 1 ≤ k ≤ n, it holds that â_{n,k} ≤ a_{n,k} ≤ (1 + 2^(1−b))^(cn) â_{n,k}, for a suitable constant c > 0.
We prove the first inequality; the second follows analogously. We reason by induction on n, the claim being clear for the base case. For any k, it holds that
since , by (3) it holds that , and from the inductive hypothesis we have .
Since the sum goes over s from 1 to n − k, we have to do n − k − 1 floating-point additions; therefore, by (2),
We assumed that , therefore this implies, by (3), that
Since and , we have , which proves the claim, because .
Notice that the table of numbers â_{n,k} depends only on n and ε. We propose to run Algorithm 1 on the table â_{n,k} instead. We use the same scheme as before for choosing k, now over the approximate partial sums. This is our FPTAS for approximate random generation.
Let D be a fixed DAG with n vertices; let k_1 be the number of sources of D, let k_2 be the number of sources remaining after the sources of D are removed, and so on, until D is exhausted. The probability of generating D u.a.r., which is 1/a_n, can also be expressed, as a consequence of Algorithm 1, as
The probability that randomGenerateDAG generates this fixed DAG is
Therefore, by Lemma 1, the two probabilities are close. If we choose b appropriately, it holds that
By standard techniques, for all natural numbers n and all ε, the following hold:
4 Counting 0/1 Knapsack solutions
The classic pseudo-polynomial algorithm for counting 0/1 Knapsack solutions defines s(i, c) as the number of Knapsack solutions that use a subset of the items {1, …, i}, of weight at most c, and computes these values by dynamic programming, using the recurrence
s(i, c) = s(i−1, c) + s(i−1, c − w_i),  (7)
with s(0, c) = 1 for all c ≥ 0, and s(i, c) = 0 for c < 0. Indeed, we either use only a subset of the items {1, …, i−1} whose weights sum up to at most c, or use item i of weight w_i together with a subset of the items {1, …, i−1} whose weights sum up to at most c − w_i. This DP algorithm executes O(nC) additions on numbers of at most n bits; its complexity is only pseudo-polynomial, as it depends on C. We will assume, like in [26], that additions and comparisons on the numbers involved have unit cost.
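In code, the exact DP behind recurrence (7) reads as follows (a Python sketch; the row s corresponds to s(i, ·)):

```python
def count_knapsack_exact(weights, C):
    """Exact pseudo-polynomial DP: after item i has been processed,
    s[c] is the number of subsets of the first i items whose weights
    sum up to at most c."""
    s = [1] * (C + 1)                 # i = 0: only the empty subset
    for w in weights:
        new = list(s)
        for c in range(w, C + 1):
            new[c] += s[c - w]        # solutions that take the item
        s = new
    return s[C]

# Subsets of {2, 3, 4} of weight <= 5: {}, {2}, {3}, {4}, {2, 3}.
assert count_knapsack_exact([2, 3, 4], 5) == 5
```

The table has (n + 1)(C + 1) entries, which is exactly what the approximation scheme below avoids storing.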
We also use relation (7) to count, but our numbers ŝ are approximate floating-point numbers with O(log n) bits for the exponent and b bits for the mantissa (we can assume for simplicity that the solution using all n objects has weight greater than C, so that every s(i, c) is strictly smaller than 2^n).
For every i ∈ {0, 1, …, n} we keep a list, L_i, whose entries are pairs of the form (c, ŝ), where c is a capacity in {0, …, C} and ŝ is an approximate floating-point number of solutions. We will refer to the set of first components of the pairs in L_i as the capacities in L_i.
Having L_i, for every capacity c we define ŝ_i(c) as the second component of the pair of L_i with the largest capacity at most c, where this value is taken to be 0 if no such pair exists.
The first list, L_0, consists of the single pair (0, 1). After this initialization, while computing L_i from L_{i−1}, we maintain the following two invariant properties:
(I) L_i is strictly increasing on both components;
(II) ŝ_i(c) approximates s(i, c) within the accumulated relative error, for every capacity c.
Note that Property (I) implies that the length of L_i is at most the total number of floating-point numbers that can be represented with O(log n) exponent bits and b mantissa bits.
We obtain L_i from L_{i−1} by first building the bimonotonic list L′_i which, for every capacity c in L_{i−1}, contains the following two pairs:
(c, ŝ_{i−1}(c) ⊕ ŝ_{i−1}(c − w_i)) and (c + w_i, ŝ_{i−1}(c + w_i) ⊕ ŝ_{i−1}(c)).  (8)
It may turn out that L′_i contains distinct pairs having the same second component. Therefore, in order to ensure Property (I), we obtain L_i by pruning from L′_i every pair for which another pair with a smaller capacity and the same second component is present. We summarize this procedure as Algorithm 2. Lemma 2 below shows that we can efficiently construct L_i; the idea of the proof is to do two linear scans of L_{i−1}, each with two pointers, and it is given in Appendix B.
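Putting the pieces together, the whole approximate scheme can be sketched as follows (our own Python rendering, not the paper's Algorithm 2; all names are ours, bisect stands in for the successor search, and pruning keeps the smallest capacity per value):

```python
import bisect

def trunc(x, b):
    """Keep only the leading b bits of x (exponent kept exactly)."""
    e = x.bit_length()
    return x if e <= b else (x >> (e - b)) << (e - b)

def lookup(L, c):
    """Value at capacity c of the step function encoded by the list L
    of (capacity, value) pairs: the value of the last pair with
    capacity <= c, or 0 if there is none."""
    i = bisect.bisect_right([cap for cap, _ in L], c)
    return L[i - 1][1] if i else 0

def count_knapsack_approx(weights, C, b):
    """List-based approximate DP: L encodes c -> approximate number of
    solutions as a list strictly increasing in both components."""
    L = [(0, 1)]                          # only the empty subset
    for w in weights:
        # capacities where the new step function can change value
        cands = sorted({c for c, _ in L} |
                       {c + w for c, _ in L if c + w <= C})
        raw = [(c, trunc(lookup(L, c) + lookup(L, c - w), b))
               for c in cands]
        # prune duplicates: keep the smallest capacity for each value
        L = []
        for c, v in raw:
            if not L or v > L[-1][1]:
                L.append((c, v))
    return lookup(L, C)

assert count_knapsack_approx([2, 3, 4], 5, 64) == 5   # b large: exact
```

Since truncation only rounds down, the returned value never exceeds the exact count, and each item costs at most one (1 + 2^(1−b)) factor of relative error.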
We can compute L′_i and L_i from L_{i−1} in time linear in the length of L_{i−1}.
Property (II) holds, that is, for every i and every capacity c, the stated approximation of s(i, c) by ŝ_i(c) holds.
The claim is clear for . For an arbitrary capacity , let in be such that . From the definition of , we get ; from the fact that the pairs in are of the form (8), we have
Since the capacities in L_i are a subset of the capacities in L′_i, and since we have pruned the pairs of L′_i by keeping the smallest capacity for every approximate number of solutions, the claimed relation holds. Moreover, observe that there is no intermediate capacity of L_{i−1} falling strictly between the relevant capacities. Indeed, assuming the contrary, such a capacity would appear in L′_i, by (8). Since we have chosen the largest capacity below it, this implies that the corresponding pair was pruned when passing from L′_i to L_i; thus, the two pairs of L′_i having these two capacities as first components have equal second components. By (9) and the bimonotonicity of L′_i, this entails that also the two pairs of L_{i−1} having these capacities as first components must have equal second components. This contradicts the fact that L_{i−1} satisfies Property (I).
From Lemma 3, the fact that Property (I) holds, and (6), we finally obtain Thm. 4. In part of the parameter range, our deterministic FPTAS also runs faster than the Monte Carlo FPRASes in [5], which previously held the record on the whole range.
Onwards, we briefly sketch how to apply this method to the Knapsack problem on a DAG (the full explanation is available in Appendix C). We can assume that all vertices of the DAG are reachable from s, and that all vertices reach t. For simplicity, we transform the DAG into an equivalent one in which every vertex has at most two incoming arcs, at the price of a constant-factor increase in the number of vertices and arcs, and consider a topological ordering of its vertices. We now denote by p_v(c) the number of paths from s to v whose total weight is at most c. If a node v has in-neighbors u_1 and u_2, and the arcs entering v have weights w_1 and w_2, respectively, relation (7) generalizes to
p_v(c) = p_{u_1}(c − w_1) + p_{u_2}(c − w_2).  (11)
The solution is obtained as p_t(C). As before, we use (11) to count, keeping at each step approximate floating-point numbers. These numbers still have O(log n) bits for the exponent but, since the error now accumulates along paths, the length of their mantissa depends on the maximum path length in the transformed DAG. Additions and comparisons of these floating-point numbers still take the same time as before.
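The exact counterpart of this DP can be sketched as follows (Python; names are ours, vertices are assumed numbered in topological order, and s is assumed to have no incoming arcs):

```python
def count_paths_dp(n, arcs, s, t, C):
    """p[v][c] = number of s-to-v paths of total weight at most c,
    filled in topological order (vertices assumed numbered 0..n-1 so
    that every arc goes from a smaller to a larger label, and s has
    no incoming arcs)."""
    inc = [[] for _ in range(n)]          # incoming arcs per vertex
    for u, v, w in arcs:
        inc[v].append((u, w))
    p = [[0] * (C + 1) for _ in range(n)]
    p[s] = [1] * (C + 1)                  # the empty path at s
    for v in range(n):
        for u, w in inc[v]:
            for c in range(w, C + 1):
                p[v][c] += p[u][c - w]    # extend s-u paths by (u, v)
    return p[t][C]

# The Knapsack-shaped DAG for weights (2, 3, 4): 5 solutions up to C = 5.
arcs = [(0, 1, 0), (0, 1, 2), (1, 2, 0), (1, 2, 3), (2, 3, 0), (2, 3, 4)]
assert count_paths_dp(4, arcs, 0, 3, 5) == 5
```

The approximate version of the paper replaces each row p[v] by a pruned capacity list, exactly as in the 0/1 Knapsack case, merging the lists of the at most two in-neighbors.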
As before, for every vertex v, we keep a list, L_v, of pairs [capacity, approximate number of solutions]. Analogously, L_s consists of the single pair (0, 1), and while computing L_v from the list of its only in-neighbor, or from the lists of its two in-neighbors (doable in time linear in their lengths), we maintain the following two invariants, stated in terms of the length of the longest path from s to v:
(I) L_v is strictly increasing on both components;
(II) p_v is approximated within the accumulated relative error, for every capacity.
From these considerations, Thm. 5 immediately follows.
This work was partially supported by the Academy of Finland under grant 250345 (CoECGR), and by the European Science Foundation, activity “Games for Design and Verification”. We thank Djamal Belazzougui and Daniel Valenzuela for discussions on data structures for large numbers, and Stephan Wagner for remarks on decomposable structures.
-  S. A. Andersson, D. Madigan, and M. D. Perlman, A characterization of Markov equivalence classes for acyclic digraphs, Ann. Statist, (1997), pp. 502–541.
-  E. A. Bender, L. B. Richmond, R. W. Robinson, and N. C. Wormald, The asymptotic number of acyclic digraphs, I, Combinatorica, 6 (1986), pp. 15–22.
-  E. A. Bender and R. W. Robinson, The asymptotic number of acyclic digraphs, II, J. Comb. Theory, 44 (1988), pp. 363–369.
-  A. Denise and P. Zimmermann, Uniform Random Generation of Decomposable Structures Using Floating-Point Arithmetic, Theor. Comput. Sci., 218 (1999), pp. 233–248.
-  M. E. Dyer, Approximate counting by dynamic programming, in STOC, L. L. Larmore and M. X. Goemans, eds., ACM, 2003, pp. 693–699.
-  M. E. Dyer, A. M. Frieze, R. Kannan, A. Kapoor, L. Perkovic, and U. V. Vazirani, A Mildly Exponential Time Algorithm for Approximating the Number of Solutions to a Multidimensional Knapsack Problem, Combinatorics, Probability & Computing, 2 (1993), pp. 271–284.
-  P. Flajolet and R. Sedgewick, Analytic Combinatorics, Cambridge University Press, 2009.
-  S. B. Gillispie and M. D. Perlman, Enumerating Markov Equivalence Classes of Acyclic Digraph Models, in Proc. of the Conf. on Uncertainty In Artificial Intelligence, Morgan Kaufmann, 2001, pp. 171–177.
-  P. Gopalan, A. Klivans, and R. Meka, Polynomial-Time Approximation Schemes for Knapsack and Related Counting Problems using Branching Programs, CoRR, abs/1008.3187 (2010).
-  P. Gopalan, A. Klivans, R. Meka, D. Stefankovic, S. Vempala, and E. Vigoda, An FPTAS for #Knapsack and Related Counting Problems, in FOCS, R. Ostrovsky, ed., IEEE, 2011, pp. 817–826.
-  M. Jerrum, L. G. Valiant, and V. V. Vazirani, Random generation of combinatorial structures from a uniform distribution, Theor. Comput. Sci., 43 (1986), pp. 169–188.
-  G. Melançon, I. Dutour, and M. Bousquet-Mélou, Random generation of directed acyclic graphs, Electronic Notes in Discrete Mathematics, 10 (2001), pp. 202–207.
-  G. Melançon and F. Philippe, Generating connected acyclic digraphs uniformly at random, Inf. Process. Lett., 90 (2004), pp. 209–213.
-  M. Mihalák, R. Šrámek, and P. Widmayer, Counting approximately-shortest paths in directed acyclic graphs, in 11th Workshop on Approximation and Online Algorithms – WAOA 2013, 2013. In press (a preliminary version is available at http://arxiv.org/abs/1304.6707v2).
-  M. Milanič and A. I. Tomescu, Set graphs. I. Hereditarily finite sets and extensional acyclic orientations, Discrete Applied Mathematics, 161 (2013), pp. 677–690.
-  B. Morris and A. Sinclair, Random walks on truncated cubes and sampling 0-1 knapsack solutions, SIAM J. Comput., 34 (2004), pp. 195–226.
-  R. Peddicord, The number of full sets with n elements, Proc. Amer. Math. Soc., 13 (1962), pp. 825–828.
-  J. M. Peña, Approximate Counting of Graphical Models Via MCMC, Journal of Machine Learning Research - Proceedings Track, 2 (2007), pp. 355–362.
-  A. Policriti and A. I. Tomescu, Counting extensional acyclic digraphs, Information Processing Letters, 111 (2011), pp. 787–791.
-  R. Rizzi and A. I. Tomescu, Ranking, unranking and random generation of extensional acyclic digraphs, Inf. Process. Lett., 113 (2013), pp. 183–187.
-  R. W. Robinson, Counting labeled acyclic digraphs, in New directions in the Theory of Graphs, F. Harary, ed., Academic Press, NY, 1973, pp. 239–273.
-  A. Schönhage and V. Strassen, Schnelle Multiplikation großer Zahlen, Computing, 7 (1971), pp. 281–292.
-  B. Steinsky, Efficient coding of labeled directed acyclic graphs, Soft Comput., 7 (2003), pp. 350–356.
-  B. Steinsky, Enumeration of labelled chain graphs and labelled essential directed acyclic graphs, Discrete Mathematics, 270 (2003), pp. 266–277.
-  B. Steinsky, Asymptotic behaviour of the number of labelled essential acyclic digraphs and labelled chain graphs, Graphs and Combinatorics, 20 (2004), pp. 399–411.
-  D. Štefankovič, S. Vempala, and E. Vigoda, A deterministic polynomial-time approximation scheme for counting knapsack solutions, SIAM J. Comput., 41 (2012), pp. 356–366.
-  S. Wagner, Asymptotic enumeration of extensional acyclic digraphs, in ANALCO, C. Martínez and H.-K. Hwang, eds., SIAM, 2012, pp. 1–8.
-  S. Wagner, Asymptotic enumeration of extensional acyclic digraphs, Algorithmica, (2012), pp. 1–19. DOI: 10.1007/s00453-012-9725-4.
Appendix A Random generation of other DAG subclasses
a.1 Essential DAGs
Essential DAGs are used to represent the structure of Bayesian networks [8, 18]. They were counted in [24] by inclusion-exclusion, and their asymptotic behavior was studied in [25]. We give a new counting recurrence for essDAGs, which leads to the first algorithm for generating u.a.r. a labeled essDAG with n vertices; this is useful for learning the structure of a Bayesian network from data [8, 18]. This can be turned into an FPTAS, with the same complexity and approximation bounds as in the case of DAGs.
Essential DAGs (essDAGs) are those DAGs with the property that, for every arc (u, v), the in-neighborhood of u is different from the in-neighborhood of v minus the vertex u itself; that is, for every arc (u, v) it holds that N⁻(u) ≠ N⁻(v) \ {u}.
Define the depth of a vertex v in a DAG as the length of a longest directed path from a source of the DAG to v. Note that a vertex of maximum depth must be a sink (but the converse does not hold). Let us denote by e_{n,k} the number of labeled essDAGs with n vertices in which there are k vertices of maximum depth.
For any and any