Combinatorial decomposition approaches for efficient counting and random generation FPTASes

Romeo Rizzi
Department of Computer Science, University of Verona, Italy
romeo.rizzi@univr.it

Alexandru I. Tomescu
Helsinki Institute for Information Technology HIIT,
Department of Computer Science, University of Helsinki, Finland
tomescu@cs.helsinki.fi
Abstract

Given a combinatorial decomposition for a counting problem, we resort to the simple scheme of approximating large numbers by floating-point representations in order to obtain efficient Fully Polynomial Time Approximation Schemes (FPTASes) for it. The number of bits employed for the exponent and the mantissa will depend on the error parameter $\varepsilon$ and on the characteristics of the problem. Accordingly, we propose the first FPTASes with relative error $\varepsilon$ for counting and generating uniformly at random a labeled DAG with a given number of vertices. This is accomplished starting from a classical recurrence for counting DAGs, whose values we approximate by floating-point numbers.

After extending these results to other families of DAGs, we show how the same approach also works with problems where we are given a compact representation of a combinatorial ensemble and we are asked to count and sample elements from it. We employ here the floating-point approximation method to transform the classic pseudo-polynomial algorithm for counting 0/1 Knapsack solutions into a very simple FPTAS with relative error $\varepsilon$. Its complexity improves upon the recent result (Štefankovič et al., SIAM J. Comput., 2012), and, when $\varepsilon = O(1/n)$, also upon the best-known randomized algorithm (Dyer, STOC, 2003). To show the versatility of this technique, we also apply it to a recent generalization of the problem of counting 0/1 Knapsack solutions to an arc-weighted DAG, obtaining a faster and simpler FPTAS than the existing one.

1 Introduction

In this paper we consider two main types of counting problems. In the first (combinatorial family), the input consists of a single integer $n$ and we are interested in counting/generating the objects of the $n$th slice of a family parametrized by $n$, such as all labeled trees on $n$ vertices or all well-formed formulas on $n$ parentheses; in this paper we tackle labeled directed acyclic graphs (DAGs) on $n$ vertices, and two DAG subclasses. In the second (combinatorial ensemble), we are given a structure and we want to count and sample all of its substructures with a given property, such as the spanning trees or the perfect matchings of a graph given in input; we tackle here the problem of counting 0/1 Knapsack solutions, and a generalization of this problem to a DAG.

A general result by Jerrum, Valiant and Vazirani [11] is that the problem of exact uniform random generation of 'efficiently verifiable' combinatorial structures is reducible to the counting problem. Since in many cases the counting problem is either hard, or simply expensive in practice, [11] also shows that for self-reducible problems almost uniform random generation and randomized approximate counting are inter-reducible. As a consequence, randomized approximation approaches, in particular those based on Markov chains, have attracted the most attention. These approaches are general, in the sense that they do not rely on problem-specific combinatorial decompositions, and they allow for faster algorithms when approximate solutions are good enough.

The intended message of the present work is that the compromise towards approximate solutions can also take place in the context of methods based on combinatorial decompositions. In order to facilitate this, we build up a minimalistic layer of floating-point arithmetic suitably tailored for this purpose. Our idea dates back to Denise and Zimmermann [4], who considered floating-point arithmetic for uniform random generation of decomposable structures. A 'decomposable structure' is a combinatorial structure definable in terms of the 'standard constructions' of disjoint union, Cartesian product, Sequence, Cycle, and Set (see e.g. [7] for details). These include, for example, unordered or ordered trees, permutations, and set partitions, but do not include more complex objects such as DAGs, nor substructures of a structure given in input, such as the solutions of a given 0/1 Knapsack instance. Moreover, even though [4] relates the relative error to the length of the mantissa, their results are not stated in terms of FPTASes. An FPTAS is a deterministic algorithm that estimates the exact solution within relative error $\varepsilon$, in time polynomial in the input size and in $1/\varepsilon$.

Given an error $\varepsilon$, we represent large integers as floating-point numbers having an exact exponent, so that no overflow can occur, and a mantissa whose length depends on $\varepsilon$ and is only as long as needed to guarantee a relative error of $\varepsilon$.

We show that floating-point arithmetic can be added as a technical layer on top of any suitable combinatorial decomposition of the problem at hand, obtaining both efficient, state-of-the-art deterministic FPTASes for counting, and practical random generation algorithms with explicit error probability bounds. Some of our FPTASes are actually linear in the number of bits of the exact count, which, in the case of counting problems, means a linear dependence on the length of the output.

Until now, for all the problems considered in this paper, Monte Carlo algorithms had the best running times, with a performance guarantee either proven (like for counting 0/1 Knapsack solutions [5]) or just generally accepted (like for DAG generation [12]). Recently, other authors have proposed deterministic algorithms for approximate counting [26, 10]. However, our deterministic algorithms are the first ones to reach, and even improve upon, the running times of the Monte Carlo algorithms. Considering also that we get rid of the error probability, it is quite remarkable that we close the gap between deterministic and randomized algorithms.

In the same way that Markov chains offer a fascinating layer of reusable theory, our approach is also unifying, with the math required for bounding the run-time in terms of $\varepsilon$ embodied in the technical floating-point arithmetic layer. Even though its level of generality is not comparable, it still offers a conceptual tool that can guide and inspire the design of new algorithms. In this new scenario, the length of the mantissa becomes a resource, and minimizing its consumption leads one to reduce the number of successive approximation phases in the processing of the data flow. This view indeed supported us in gaining an extra factor in Thm. 5. Moreover, the algorithms inspired by this framework do not require the difficult ad-hoc analysis of rapid mixing properties of Markov chains, necessary for a conclusive word on the actual computational complexity of a given problem.

Based on these facts, we hope to see a renewed interest in methods grounded in the combinatorial decomposition of the problem at hand, both in practical and theoretical studies on counting and random generation, where the problems allow.

1.1 Counting and random generation for a combinatorial family

To illustrate the floating-point approximation scheme for a combinatorial family, we focus on DAGs. They constitute a basic class of graphs, with applications in various fields. As in the case of other combinatorial objects, the problem of generating uniformly at random (u.a.r., for short) a DAG with $n$ labeled vertices was first tackled with a Markov chain algorithm [12, 13]. The main issue behind such a randomized approach lies in the difficulty of proving the rapid mixing property; this was the case here for DAGs, as such a proof never appeared. Steinsky [23] proposed a nice generalization of Prüfer's encoding of labeled trees to labeled DAGs, and put forth ranking and unranking algorithms. These led to a deterministic random generation algorithm whose time and space are polynomial in $n$, with the time bound involving the slowdown factor of multiplying two large numbers (see Section 2).

Our solution is based on the decomposition of DAGs by sources, initially proposed by Robinson [21] to obtain a recurrence counting labeled DAGs with $n$ vertices and a given number of sources. We exploit this decomposition by generating a labeled DAG recursively, at each step generating its sources (and their out-going arcs) by using the values of the counting recurrence as probability distribution. To further illustrate this method, in Appendix A we consider two recently studied subclasses of DAGs, essential DAGs (essDAGs) [1], and extensional DAGs (extDAGs) [15].

Theorem 1

A labeled DAG, an essential DAG, or an extensional DAG with $n$ vertices can be generated u.a.r. in time polynomial in $n$, provided a pre-computed table of size polynomial in $n$, itself computable in polynomial time, is available.

We then show that, instead of storing the values of the counting recurrence as exact numbers on $O(n^2)$ bits, we can store approximate floating-point numbers with $O(\log n)$ bits for the exponent and $O(\log(n/\varepsilon))$ bits for the mantissa. This leads to the first deterministic FPTASes for counting and random generation, as stated in the following theorems, where $a_n$, $e_n$, and $x_n$ denote the number of labeled DAGs, essential DAGs, and extensional DAGs, respectively, on $n$ vertices.

Theorem 2

For any $0 < \varepsilon \le 1$, and for every $n$, we can compute an $O(\log(n/\varepsilon))$-bit floating-point number $\tilde{a}_n$ (respectively, $\tilde{e}_n$, $\tilde{x}_n$) such that $\tilde{a}_n \le a_n \le (1+\varepsilon)\,\tilde{a}_n$, $\tilde{e}_n \le e_n \le (1+\varepsilon)\,\tilde{e}_n$, or $\tilde{x}_n \le x_n \le (1+\varepsilon)\,\tilde{x}_n$, in time polynomial in $n$ and $\log(1/\varepsilon)$.

Theorem 3

For any $0 < \varepsilon \le 1$, and for every $n$, we can generate at random a labeled DAG, essential DAG, or extensional DAG on $n$ vertices, each one with a probability $p$ such that $\frac{1}{(1+\varepsilon)\,a_n} \le p \le \frac{1+\varepsilon}{a_n}$, $\frac{1}{(1+\varepsilon)\,e_n} \le p \le \frac{1+\varepsilon}{e_n}$, or $\frac{1}{(1+\varepsilon)\,x_n} \le p \le \frac{1+\varepsilon}{x_n}$, respectively. This can be done in time polynomial in $n$ and $\log(1/\varepsilon)$, provided a pre-computed table of size polynomial in $n$ and $\log(1/\varepsilon)$, itself computable in polynomial time, is available.

Notice how, since $a_n$, $e_n$, and $x_n$ are less than $2^{n^2}$, choosing $\varepsilon < 2^{-n^2}$ implies full precision, and, in the case of Thm. 3, we get the same running times as in Thm. 1.

1.2 Counting a combinatorial ensemble

To illustrate the floating-point approximation scheme for a combinatorial ensemble, we choose the well-known problem of counting 0/1 Knapsack solutions. We are given a set of nonnegative integer weights $w_1, \dots, w_n$ and an integer capacity $C$, and are asked how many subsets of the weights sum up to at most $C$. Since this problem is #P-complete, research has focused on approximation algorithms. The first one was a randomized subexponential time algorithm [6], based on near-uniform sampling of feasible solutions by a random walk. A rapidly mixing Markov chain appeared in [16], which provided the first Fully Polynomial Time Randomized Approximation Scheme (FPRAS) for this problem, which had remained open for some time. An FPRAS with complexity $O(n^3 + n^2/\varepsilon^2)$ was given in [5], by combining dynamic programming and rejection sampling; this bound can be improved to $O(n^{2.5}\sqrt{\log(1/\varepsilon)} + n^2/\varepsilon^2)$ by a more sophisticated approach using randomized rounding [5]. Recently, [26, 10] gave the first deterministic FPTAS for this problem, running in time $O(n^3\,\varepsilon^{-1}\log(n/\varepsilon))$. A weaker result, namely a version of the algorithm of [26] in which the number of arithmetic operations depends also on $\log C$, appeared in [9] and in the combined extended abstract [10].

The solution in [26] is based on a function $\tau(i, a)$, defined as the smallest capacity such that there exist at least $a$ solutions to the 0/1 Knapsack problem with weights $w_1, \dots, w_i$ and that capacity. The second parameter $a$ of $\tau$ is then approximated, and $\tau$ is computed by a dynamic programming algorithm.

We start from the classic pseudo-polynomial dynamic programming algorithm obtained from the recurrence

$$K(i, c) = K(i-1, c) + K(i-1, c - w_i),$$

where $K(i, c)$ is the number of 0/1 Knapsack solutions that use a subset of the items $\{w_1, \dots, w_i\}$ whose weights sum up to at most $c$. We approximate the values $K(i, c)$ using floating-point numbers, which leads to a more direct FPTAS, with a much simpler proof, and an easily implementable algorithm. Making the same assumption as [26], that additions on $O(\log(n/\varepsilon))$-bit numbers take unit time, we improve [26, 10] as follows:

Theorem 4

For every $0 < \varepsilon \le 1$, and for every input $w_1, \dots, w_n$, $C$ to the 0/1 Knapsack counting problem, we can compute a floating-point number $\widetilde{K}$ of $O(\log(n/\varepsilon))$ bits, which satisfies $\widetilde{K} \le K(n, C) \le (1+\varepsilon)\,\widetilde{K}$, in time $O(n^3/\varepsilon)$, assuming unit cost additions and comparisons on numbers with $O(\log(n/\varepsilon))$ bits.

Note that if $\varepsilon = O(1/n)$, our deterministic FPTAS also improves upon both FPRASes in [5].

Our reasoning is along the following lines. Since the number of solutions can be at most $2^n$, and the values of the dynamic programming are obtained by sequences of successive additions, we can approximate them using floating-point numbers with $\lceil \log_2 n \rceil$ bits for the exponent and $r = O(\log(n/\varepsilon))$ bits for the mantissa. In order to obtain the approximation factor $1+\varepsilon$, we will show that the relative error of each approximation of $K(i, \cdot)$ is at most $(1+2^{1-r})^{2i}$, for any $1 \le i \le n$. To keep the table small, we exploit the fact that the number of different entries in each row of the approximated table is at most $O(2^r n) = O(n^2/\varepsilon)$.

Recently, the problem of counting 0/1 Knapsack solutions has been extended to a DAG, as follows [14]. Given a DAG $G$ with nonnegative arc weights, two vertices $s$ and $t$, and a capacity $C$, count how many paths exist from $s$ to $t$ of total weight at most $C$; this problem is relevant for various applications in biological sequence analysis, see the references in [14]. This is clearly a generalization of counting 0/1 Knapsack solutions, since, given an instance $w_1, \dots, w_n$, $C$, it suffices to construct the DAG having $\{v_0, v_1, \dots, v_n\}$ as vertex set, $s = v_0$, $t = v_n$, and, for each $i \in \{1, \dots, n\}$, two parallel arcs from $v_{i-1}$ to $v_i$, with weights 0 and $w_i$, respectively.

In [14], the technique of [26] is extended to this problem, and an FPTAS running in time $O(m n^2\,\varepsilon^{-1}\log(n/\varepsilon))$ is obtained (inaccurately, the $\log(n/\varepsilon)$ factor is missing from their stated complexity bound). Just as we do for the classical 0/1 Knapsack problem, we start from the basic pseudo-polynomial dynamic programming algorithm extended to a DAG, whose values we approximate using floating-point numbers. We show that we can organize the computation in sequences of successive additions, so that we need floating-point numbers with only $O(\log(n/\varepsilon))$ bits for the mantissa and $O(\log n)$ bits for the exponent. This analogously leads to a faster and simpler FPTAS.

Theorem 5

For every $0 < \varepsilon \le 1$, and for every input consisting of a DAG $G$ on $n$ vertices and $m$ arcs, with nonnegative arc weights, two vertices $s$ and $t$, and a capacity $C$, we can compute a floating-point number $\widetilde{N}$ satisfying $\widetilde{N} \le N \le (1+\varepsilon)\,\widetilde{N}$, where $N$ is the number of $s$-$t$ paths of total weight at most $C$, in time $O(m n^2/\varepsilon)$, assuming unit cost additions and comparisons on numbers with $O(\log(n/\varepsilon))$ bits.

2 Approximation by floating-point numbers

Throughout this paper, we assume that the problem instances consist of $n$ objects (DAGs with $n$ vertices, 0/1 Knapsack instances with $n$ objects). Let $b$ be such that the maximum numerical value of a particular counting problem is at most $2^b$ (that is, it can be represented with $b$ bits). Any number $N \in \{1, \dots, 2^b\}$ can be written as

$$N = 2^{e} \sum_{i=1}^{e} m_i\,2^{-i},$$

where $e \le b$, $m_1 = 1$, and $m_i \in \{0, 1\}$, for $1 \le i \le e$. Under floating-point arithmetic terminology, $e$ is called the exponent of $N$, and the binary string $m_1 m_2 \cdots m_e$ is called its mantissa.

We will approximate $N$ as a floating-point number which has $\lceil \log_2 b \rceil$ bits dedicated to storing its exponent $e$ exactly, but only $r$ bits dedicated to storing the first $r$ bits of its mantissa; that is, we approximate $N$ by the number

$$\widetilde{N}_r = 2^{e} \sum_{i=1}^{\min(e,\,r)} m_i\,2^{-i}.$$

We will often drop the subscript $r$ when this is clear from the context. For sure, we will choose $r \le b$, since the contrary cannot help.

For every $N \in \{1, \dots, 2^b\}$, it holds that

$$\widetilde{N} \;\le\; N \;\le\; \left(1 + 2^{1-r}\right) \widetilde{N}. \qquad (1)$$

Let $\widetilde{X}$ and $\widetilde{Y}$ be two floating-point numbers with $\lceil \log_2 b \rceil$ bits for the exponent and $r$ bits for the mantissa. We denote by $\widetilde{X} \oplus \widetilde{Y}$ the floating-point number obtained by truncating the exact sum $\widetilde{X} + \widetilde{Y}$ back to $r$ bits of mantissa, and similarly by $\widetilde{X} \otimes \widetilde{Y}$ the truncated product. We assume that we can compute $\widetilde{X} \oplus \widetilde{Y}$ with a bit complexity of $O(r + \log b)$; if additions on $(r + \lceil \log_2 b \rceil)$-bit numbers take unit time, then we assume that we can compute $\widetilde{X} \oplus \widetilde{Y}$ with a word complexity of $O(1)$. Let us denote by $\mathcal{M}(r)$ the slowdown factor of a multiplication algorithm on $r$-bit numbers; for example, the Schönhage–Strassen algorithm [22] multiplies two $r$-bit numbers in time $O(r \log r \log\log r)$, and we get $\mathcal{M}(r) = O(\log r \log\log r)$. Accordingly, we assume that we can compute $\widetilde{X} \otimes \widetilde{Y}$ with $O(r\,\mathcal{M}(r) + \log b)$ bit operations.

If $X, Y \le 2^b$, and $\widetilde{X}, \widetilde{Y}$ are two floating-point numbers with $\lceil \log_2 b \rceil$ bits for the exponent and $r$ bits for the mantissa such that

$$\widetilde{X} \le X \le \left(1+2^{1-r}\right)^{i} \widetilde{X}, \qquad \widetilde{Y} \le Y \le \left(1+2^{1-r}\right)^{j} \widetilde{Y},$$

for some integers $i, j \ge 0$, then by (1) the following inequalities hold:

$$\widetilde{X} \oplus \widetilde{Y} \;\le\; X + Y \;\le\; \left(1+2^{1-r}\right)^{\max(i,j)+1} \left(\widetilde{X} \oplus \widetilde{Y}\right), \qquad (2)$$

$$\widetilde{X} \otimes \widetilde{Y} \;\le\; X \cdot Y \;\le\; \left(1+2^{1-r}\right)^{i+j+1} \left(\widetilde{X} \otimes \widetilde{Y}\right). \qquad (3)$$
For each particular problem, we will choose $r$ as a function of the instance size $n$ and of the error factor $\varepsilon$, with $r = O(\log(n/\varepsilon))$ in all cases. For the problem of random generation of DAGs we have $b = O(n^2)$; in the case of counting 0/1 Knapsack solutions, $b = n$; while for its extension on a DAG, $b = O(n \log n)$.
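To fix ideas, the following minimal Python sketch (ours, not code from the paper) implements this layer; the class name FP, the constant R playing the role of $r$, and the packing with an implicit leading mantissa bit are our own illustrative choices.

```python
# A minimal sketch (ours) of the floating-point layer: an exact exponent plus
# an R-bit mantissa with an implicit leading 1 (equivalent, up to shifting the
# exponent by one, to the m_1 = 1 convention used in the text); every result
# is truncated back to R mantissa bits.
from dataclasses import dataclass

R = 12  # the mantissa length r; chosen per problem, e.g. r = O(log(n/eps))

@dataclass(frozen=True)
class FP:
    e: int  # exact exponent
    m: int  # first R bits of the mantissa after the leading 1, m < 2**R

    @staticmethod
    def of(n: int) -> "FP":
        """Truncate a positive integer, so FP.of(n).value() <= n, cf. (1)."""
        e = n.bit_length() - 1
        m = (n - (1 << e)) >> max(0, e - R) << max(0, R - e)
        return FP(e, m)

    def value(self) -> int:
        """The exact integer value of 2^e * (1 + m / 2^R)."""
        return ((1 << R) + self.m) << self.e >> R

    def __add__(self, other: "FP") -> "FP":  # the operation (+) of the text
        return FP.of(self.value() + other.value())

    def __mul__(self, other: "FP") -> "FP":  # the operation (x) of the text
        return FP.of(self.value() * other.value())

# each of the two truncations loses a relative factor of at most (1 + 2^-R)
x, y = FP.of(10**9 + 7), FP.of(10**9 + 9)
assert (x + y).value() <= (10**9 + 7) + (10**9 + 9) \
       <= (x + y).value() * (1 + 2**-R) ** 2
```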

3 Random generation of DAGs

In Sec. 3.1 we present the well-known decomposition of labeled DAGs by sources [21], and turn it into a deterministic random generation algorithm. In Sec. 3.2, we show how to approximate the numerical values of the counting recurrence, and argue that the resulting random generation algorithm is an FPTAS of lower complexity.

3.1 Exact u.a.r. generation of DAGs by sources

For $1 \le s \le n$, let $a_{n,s}$ denote the number of labeled DAGs with $n$ vertices, out of which precisely $s$ are sources. Then $a_n = \sum_{s=1}^{n} a_{n,s}$ is the number of labeled DAGs on $n$ vertices. In [21], a simple decomposition of DAGs by sources delivers the following counting recurrence:

$$a_{n,s} = \binom{n}{s} \sum_{k=1}^{n-s} \left(2^{s} - 1\right)^{k}\, 2^{\,s(n-s-k)}\, a_{n-s,k}, \qquad (4)$$

where $a_{s,s} = 1$, for all $s \ge 1$. Indeed, there are $\binom{n}{s}$ ways to choose the sources, and by removing the sources we obtain a DAG with $n-s$ vertices and $k$ sources, for some $1 \le k \le n-s$. Each of these $k$ sources must have a non-empty in-neighborhood included in the set of $s$ removed vertices ($2^s - 1$ choices each), while the other $n-s-k$ vertices can have arbitrary in-neighbors among these $s$ vertices ($2^s$ choices each).
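As a quick sanity check of recurrence (4) (our own snippet, with hypothetical function names), one can compare its row sums against the known counts of labeled DAGs:

```python
# Recurrence (4): a(n, s) = C(n,s) * sum_k (2^s - 1)^k * 2^(s(n-s-k)) * a(n-s, k),
# with a(s, s) = 1; the row sums must give the number of labeled DAGs.
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n: int, s: int) -> int:
    if s == n:
        return 1  # the only DAG whose n vertices are all sources has no arcs
    return comb(n, s) * sum(
        (2**s - 1) ** k * 2 ** (s * (n - s - k)) * a(n - s, k)
        for k in range(1, n - s + 1)
    )

def dags(n: int) -> int:
    return sum(a(n, s) for s in range(1, n + 1))

# known counts of labeled DAGs on 1..5 vertices (OEIS A003024)
assert [dags(n) for n in range(1, 6)] == [1, 3, 25, 543, 29281]
```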

In order to generate u.a.r. a DAG having the set $V$ as vertex set, recurrence (4) suggests the following recursive algorithm. Choose the number $s$ of its sources with probability $a_{n,s}/a_n$. Then, choose u.a.r. an $s$-subset $S \subseteq V$ of sources, and call the recursive algorithm for $V \setminus S$. Finally, connect $S$ with the graph returned by the recursive call, as indicated by the proof of (4); see Algorithm 1.

Algorithm 1: randomGenerateDAG($V$)
Returns a random DAG on vertex set $V$, dubbed $D$, together with the set of its sources. The table of values $a_{n,s}$ is either computed exactly or approximately, according to recurrence (4).

1   $n \leftarrow |V|$;
2   if $n = 0$ then return $(\emptyset, \emptyset)$;
3   choose $s$ with probability $a_{n,s}/a_n$;
4   choose u.a.r. an $s$-subset $S \subseteq V$;
5   $(D', S') \leftarrow$ randomGenerateDAG($V \setminus S$);
6   $D \leftarrow D'$, together with the vertices of $S$; the sources of $D$ are $S$;
7   foreach $v \in S'$ do
8       $N \leftarrow$ a non-empty subset of $S$ chosen u.a.r.;
9       add to $D$ an arc from each vertex of $N$ to $v$;
10  foreach $v \in (V \setminus S) \setminus S'$ do
11      $N \leftarrow$ a subset of $S$ chosen u.a.r.;
12      add to $D$ an arc from each vertex of $N$ to $v$;
13  return $(D, S)$.
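A runnable Python rendering of Algorithm 1 (our sketch; it reuses the function a(n, s) from the snippet above, uses exact arithmetic, and replaces the u.a.r. choice of a non-empty subset by rejection sampling, which preserves uniformity):

```python
import random

def random_dag(vertices: list) -> tuple:
    """Return (arc set, sources) of a DAG drawn u.a.r. on the given vertices."""
    n = len(vertices)
    if n == 0:
        return set(), []
    # line 3 of Algorithm 1: choose the number s of sources w.p. a(n, s)/a_n
    rnd = random.randrange(sum(a(n, k) for k in range(1, n + 1)))
    s = 1
    while rnd >= a(n, s):
        rnd -= a(n, s)
        s += 1
    sources = random.sample(vertices, s)
    rest = [v for v in vertices if v not in sources]
    arcs, sub_sources = random_dag(rest)
    for v in rest:
        # former sources need a non-empty in-neighborhood inside the new
        # sources; every other old vertex gets an arbitrary subset of them
        while True:
            nbh = [u for u in sources if random.random() < 0.5]
            if nbh or v not in sub_sources:
                break
        arcs.update((u, v) for u in nbh)
    return arcs, sources
```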

In order to choose a number $s$ with probability $a_{n,s}/a_n$, we can choose u.a.r. an integer $\rho \in \{1, \dots, a_n\}$, and then take $s$ as the smallest integer such that $a_{n,1} + \dots + a_{n,s} \ge \rho$. For every $n$, we can store a patricia trie containing the values $a_{n,1} + \dots + a_{n,s}$, for all $1 \le s \le n$; $s$ is then found by a successor query in the patricia trie.
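For illustration, here is the same successor-query scheme with a plain sorted array of prefix sums in place of the patricia trie (our simplification; bisect performs the successor query):

```python
import bisect
import random

def choose_sources(n: int) -> int:
    """Return s in {1, ..., n} with probability a(n, s) / a_n."""
    prefix, total = [], 0
    for s in range(1, n + 1):   # prefix sums a(n,1) + ... + a(n,s),
        total += a(n, s)        # precomputable once per n
        prefix.append(total)
    rho = random.randrange(1, total + 1)
    # smallest s whose prefix sum is >= rho: the successor query
    return bisect.bisect_left(prefix, rho) + 1
```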

The asymptotic behavior of $a_n$ is $a_n \sim n!\,2^{\binom{n}{2}} / (M q^n)$, where $q$ and $M$ are constants [2, 3]. Therefore, we need $O(n^2)$ bits to store each $a_{n,s}$. In order to compute the numbers $a_{n,s}$, we assume to have access to pre-computed tables storing the numerical values of the binomial coefficients and of all powers $(2^s - 1)^k$; a number of the form $2^{s(n-s-k)}$ can be computed by setting one bit to 1. Each number $a_{n,s}$ can then be computed with $O(n)$ additions and multiplications on $O(n^2)$-bit numbers. Therefore, computing the entire table has bit complexity $O(n^5\,\mathcal{M}(n^2))$.

For every $n' \le n$, the $n'$th patricia trie can be constructed with $O(n^3)$ bit operations, uses space $O(n^3)$ bits, and supports successor queries in time $O(n^2)$; these are standard considerations on data structures. Therefore, choosing $s$ takes time $O(n^2)$. The second part of the algorithm takes $O(n^2)$ overall time, since each of the $O(n^2)$ possible arcs of a DAG is introduced at most once. Therefore, we obtain Thm. 1.

3.2 An FPTAS for generating labeled DAGs u.a.r.

Let $\varepsilon$, $0 < \varepsilon \le 1$, be fixed. Instead of using $O(n^2)$ bits for storing each entry in the table $a$, we use floating-point representations with $O(\log n)$ bits for the exponent and $r = O(\log(n/\varepsilon))$ bits for the mantissa.

For each $1 \le s \le n$, we approximate $a_{n,s}$ by $\tilde{a}_{n,s}$, recursively computed by floating-point additions and multiplications, as:

$$\tilde{a}_{n,s} = \widetilde{\binom{n}{s}} \otimes \bigoplus_{k=1}^{n-s} \left( \widetilde{(2^s-1)^k} \otimes \widetilde{2^{\,s(n-s-k)}} \otimes \tilde{a}_{n-s,k} \right), \qquad (5)$$

where $\tilde{a}_{s,s} = 1$, for all $s \ge 1$. In order to compute the numbers $\tilde{a}_{n,s}$, we assume to have access to tables now storing floating-point approximations, with $O(\log n)$ bits for the exponent and $r$ bits for the mantissa, and with a precision as in (1), of the binomial coefficients and of the numbers $(2^s-1)^k$. These floating-point numbers can be obtained from the tables storing their exact values, assumed available in the exact case, by trivially setting the exponent to be the length of the exact number, and by filling in its mantissa with the first $r$ bits. The number $2^{s(n-s-k)}$ can be represented exactly with the floating-point representation by setting the exponent to $s(n-s-k)+1$, the first bit of the mantissa to 1, and the remaining bits to 0.

Each number $\tilde{a}_{n,s}$ can be computed with $O(n)$ floating-point additions and multiplications on numbers of $r + O(\log n)$ bits; thus, the entire table can be computed in time $O(n^3 (r\,\mathcal{M}(r) + \log n))$.
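Continuing our running sketch (reusing FP, R, a and dags from the previous snippets), the approximate table can be computed and checked against the exact one as follows; the error bound in the final assertion anticipates Lemma 1:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def a_approx(n: int, s: int) -> "FP":
    if s == n:
        return FP.of(1)
    acc = None
    for k in range(1, n - s + 1):
        term = (FP.of((2**s - 1) ** k)
                * FP.of(2 ** (s * (n - s - k)))
                * a_approx(n - s, k))
        acc = term if acc is None else acc + term
    return FP.of(comb(n, s)) * acc

n = 9
exact = dags(n)
approx = sum(a_approx(n, s).value() for s in range(1, n + 1))
# truncation only loses value, and by at most (1 + 2^-R)^(2 n^2) in total
assert approx <= exact <= approx * (1 + 2**-R) ** (2 * n * n)
```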

The following lemma characterizes the approximation quality of the numbers .

Lemma 1

For any $n \ge 1$ and any $1 \le s \le n$, it holds that $a_{n,s} \le \left(1+2^{1-r}\right)^{2n^2} \tilde{a}_{n,s}$ and $\tilde{a}_{n,s} \le a_{n,s}$.

  • We prove the first inequality; the second one will follow analogously, since all floating-point operations only truncate and hence never overestimate. We reason by induction on $n$, the claim being clear for the base case $\tilde{a}_{s,s} = a_{s,s} = 1$. For any $1 \le k \le n-s$, consider the term of (4) for this $k$ and its floating-point counterpart in (5): since the stored approximation of $(2^s-1)^k$ satisfies (1), since $2^{s(n-s-k)}$ is represented exactly, and since, from the inductive hypothesis, $a_{n-s,k} \le (1+2^{1-r})^{2(n-s)^2}\,\tilde{a}_{n-s,k}$, relation (3) bounds the relative error of each approximate term by $(1+2^{1-r})^{2(n-s)^2+3}$.

    Since the sum goes over $k$ from 1 to $n-s$, we have to do $n-s-1$ floating-point additions; therefore, by (2), the relative error of the approximate sum is at most $(1+2^{1-r})^{2(n-s)^2+(n-s)+2}$.

    The final multiplication with the approximation of $\binom{n}{s}$, again by (1) and (3), raises the exponent by 2 more, to $2(n-s)^2+(n-s)+4$.

    Since $1 \le s \le n$, we have $2(n-s)^2 + (n-s) + 4 \le 2n^2$, which proves the claim.

Lemma 1 immediately implies an FPTAS for counting labeled DAGs, as stated by Thm. 2. For completing the proof of Thm. 2, take $r = 2 + \lceil \log_2(2n^2/\varepsilon) \rceil = O(\log(n/\varepsilon))$ and use relation (6) below.


Notice that the table of numbers $\tilde{a}_{n,s}$ depends only on $n$ and $\varepsilon$. We propose to run Algorithm 1 on the table $\tilde{a}$. We use the same scheme as before for choosing $s$, which is now faster, since the patricia tries store $(r + O(\log n))$-bit numbers instead of $O(n^2)$-bit ones. This is our FPTAS for approximate random generation.

  • Let $D$ be a fixed DAG with $n$ vertices and assume that $S_1$, with $|S_1| = s_1$, is the set of sources of $D$; that $S_2$, with $|S_2| = s_2$, is the set of sources of $D \setminus S_1$; and so on, until some $S_\ell$, with $s_1 + \dots + s_\ell = n$. Write $n_j = n - (s_1 + \dots + s_{j-1})$ for the number of vertices left at step $j$, and set $s_{\ell+1} = 0$. The probability of generating $D$ u.a.r., which is $1/a_n$, can also be expressed, as a consequence of Algorithm 1, as

    $$\frac{1}{a_n} = \prod_{j=1}^{\ell} \frac{a_{n_j, s_j}}{a_{n_j}} \cdot \frac{1}{\binom{n_j}{s_j}} \cdot \frac{1}{\left(2^{s_j}-1\right)^{s_{j+1}}\, 2^{\,s_j (n_j - s_j - s_{j+1})}}.$$

    The probability $p$ that randomGenerateDAG($\{1, \dots, n\}$) returns $D$ is the same expression with each factor $a_{n_j,s_j}/a_{n_j}$ replaced by $\tilde{a}_{n_j,s_j}/\tilde{a}_{n_j}$, where $\tilde{a}_m = \bigoplus_{k} \tilde{a}_{m,k}$.

    Therefore, by Lemma 1, and since $\ell \le n$, it holds that $(1+2^{1-r})^{-(2n^3+n^2)} \le p\,a_n \le (1+2^{1-r})^{2n^3+n^2}$. If we choose $r = 2 + \lceil \log_2(2(2n^3+n^2)/\varepsilon) \rceil = O(\log(n/\varepsilon))$, it holds that

    $$\frac{1}{(1+\varepsilon)\,a_n} \;\le\; p \;\le\; \frac{1+\varepsilon}{a_n}.$$

    By standard techniques, for all natural numbers $t \ge 1$ and all $0 < \varepsilon \le 1$, the following hold:

    $$\left(1+\frac{\varepsilon}{2t}\right)^{t} \le 1+\varepsilon, \qquad \left(1-\frac{\varepsilon}{2t}\right)^{t} \ge 1-\varepsilon. \qquad (6)$$

4 Counting 0/1 Knapsack solutions

The classic pseudo-polynomial algorithm for counting 0/1 Knapsack solutions defines $K(i, c)$ as the number of Knapsack solutions that use a subset of the items $\{w_1, \dots, w_i\}$, of weight at most $c$, and computes these values by dynamic programming, using the recurrence

$$K(i, c) = K(i-1, c) + K(i-1, c - w_i), \qquad (7)$$

with $K(0, c) = 1$ for all $c \ge 0$, and $K(i, c) = 0$ whenever $c < 0$.

Indeed, we either use only a subset of the items $\{w_1, \dots, w_{i-1}\}$ whose weights sum up to at most $c$, or use item $i$ of weight $w_i$ and a subset of the items $\{w_1, \dots, w_{i-1}\}$ whose weights sum up to at most $c - w_i$. This DP algorithm executes $O(nC)$ additions on $O(n)$-bit numbers and its complexity is $O(n^2 C)$. When $C \le n/\varepsilon$, this is $O(n^3/\varepsilon)$, whence $C > n/\varepsilon$ will be assumed in the following. We will assume, like in [26], that additions and comparisons on numbers with $O(\log(n/\varepsilon))$ bits have unit cost, which implies the same on $O(\log n)$-bit numbers.
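Before approximating, here is recurrence (7) as a direct exact Python sketch of ours, in the usual one-dimensional form:

```python
# Recurrence (7) computed exactly: after processing item i, K[c] equals
# K(i, c), the number of subsets of {w_1, ..., w_i} of total weight <= c.
def count_knapsack_exact(weights: list, capacity: int) -> int:
    K = [1] * (capacity + 1)  # K(0, c) = 1: only the empty subset
    for w in weights:
        for c in range(capacity, w - 1, -1):  # descending: 0/1 semantics
            K[c] += K[c - w]  # K(i, c) = K(i-1, c) + K(i-1, c - w_i)
    return K[capacity]

# subsets of {1, 2, 3} of weight <= 3: {}, {1}, {2}, {3}, {1, 2}
assert count_knapsack_exact([1, 2, 3], 3) == 5
```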

We also use relation (7) to count, but our numbers are, for every $1 \le i \le n$, approximate floating-point numbers with $\lceil \log_2 n \rceil$ bits for the exponent and $r = O(\log(n/\varepsilon))$ bits for the mantissa (we can assume for simplicity that a solution using all $n$ objects has weight greater than $C$, so that, for every $i$ and every $c \le C$, $K(i, c) < 2^n$). By the above assumption, we have that additions and comparisons of these floating-point numbers on $O(\log(n/\varepsilon))$ bits take time $O(1)$.

Algorithm 2: ApproximatelyCountKnapsackSolutions($w_1, \dots, w_n$; $C$)
An FPTAS for counting 0/1 Knapsack solutions.

1   Notation: $\widetilde{K}_i(c) :=$ the second component of the pair of $L_i$ with the maximum capacity at most $c$, taken as 0 if no such pair exists;
2   insert the pair $[0, 1]$ into $L_0$;
3   for $i = 1, \dots, n$ do
4       construct the bimonotonic list $L'_i$ containing, for each capacity $c$ in $L_{i-1}$, the two pairs:
5           $[c,\; \widetilde{K}_{i-1}(c) \oplus \widetilde{K}_{i-1}(c - w_i)]$;
6           $[c + w_i,\; \widetilde{K}_{i-1}(c + w_i) \oplus \widetilde{K}_{i-1}(c)]$;
7       obtain $L_i$ by scanning $L'_i$ and dropping a pair if the previous one has the same second component;
8   return $\widetilde{K}_n(C)$.

For every $0 \le i \le n$ we keep a list, $L_i$, whose entries are pairs of the form $[c, \widetilde{K}]$, where $c$ is a capacity in $\{0, \dots, C\}$ and $\widetilde{K}$ is an approximate floating-point number of solutions. We will refer to the set of first components of the pairs in $L_i$ as the capacities in $L_i$.

Having $L_i$, for every $c$ we define $\widetilde{K}_i(c)$ as the second component of the pair of $L_i$ whose capacity is maximum among those at most $c$, where the maximum of an empty set of capacities yields $\widetilde{K}_i(c) = 0$ (in particular, $\widetilde{K}_i(c) = 0$ for $c < 0$).

The first list, $L_0$, consists of the single pair $[0, 1]$. After this initialization, while computing $L_i$ from $L_{i-1}$, we maintain the following two invariant properties:

  • (i) $L_i$ is strictly increasing on both components;

  • (ii) $\widetilde{K}_i(c) \le K(i, c) \le \left(1+2^{1-r}\right)^{2i}\,\widetilde{K}_i(c)$, for every $c \in \{0, \dots, C\}$.

Note that Property (i) implies that the length of $L_i$ is at most the total number of floating-point numbers that can be represented with $\lceil \log_2 n \rceil$ bits for the exponent and $r$ bits for the mantissa, that is, $O(2^r n) = O(n^2/\varepsilon)$.

We obtain $L_i$ by first building the bimonotonic list $L'_i$ which, for every capacity $c$ in $L_{i-1}$, contains the following two pairs:

$$[c,\; \widetilde{K}_{i-1}(c) \oplus \widetilde{K}_{i-1}(c - w_i)], \qquad [c + w_i,\; \widetilde{K}_{i-1}(c + w_i) \oplus \widetilde{K}_{i-1}(c)]. \qquad (8)$$

It may turn out that $L'_i$ contains distinct pairs having the same second component. Therefore, in order to assure Property (i), we obtain $L_i$ by pruning away from $L'_i$ those pairs $[c', \widetilde{K}]$ for which another pair $[c, \widetilde{K}]$ with $c < c'$ is present. We summarize this procedure as Algorithm 2. Lemma 2 below shows that we can efficiently construct $L'_i$ and $L_i$; the idea of the proof is to do two linear scans of $L_{i-1}$, each with two pointers, and it is given in Appendix B.
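The following compact Python sketch of ours mirrors Algorithm 2 on top of the class FP from Section 2 (it favors clarity over the linear-scan merging of Lemma 2, using sorting and binary search instead):

```python
# Each list L holds pairs (capacity, FP count), strictly increasing in both
# components; lookup() is the successor-style query K~_i(c).
import bisect

def lookup(L, c):
    """Second component of the last pair of L with capacity <= c, else None."""
    j = bisect.bisect_right(L, c, key=lambda p: p[0]) - 1  # Python 3.10+
    return L[j][1] if j >= 0 else None

def count_knapsack_fptas(weights, capacity):
    L = [(0, FP.of(1))]  # K(0, c) = 1 for every c >= 0
    for w in weights:
        candidates = sorted({c for c, _ in L} | {c + w for c, _ in L})
        Lp = []
        for c in (c for c in candidates if c <= capacity):
            lo, hi = lookup(L, c - w), lookup(L, c)
            v = hi if lo is None else hi + lo  # the (+) of recurrence (7)
            if not Lp or Lp[-1][1].value() < v.value():  # pruning step
                Lp.append((c, v))
        L = Lp
    return lookup(L, capacity).value()

approx = count_knapsack_fptas([1, 2, 3], 3)
assert approx <= 5 <= approx * (1 + 2**-R) ** (2 * 3)  # invariant (ii), i = n
```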

Lemma 2

We can compute $L'_i$ and $L_i$ from $L_{i-1}$ in time $O(|L_{i-1}|)$.

Lemma 3

Property (ii) holds for every $0 \le i \le n$; that is, for every $i$ and every $c \in \{0, \dots, C\}$, it holds that $\widetilde{K}_i(c) \le K(i, c) \le (1+2^{1-r})^{2i}\,\widetilde{K}_i(c)$.

  • The claim is clear for $i = 0$. For an arbitrary capacity $c$, let $c^*$ in $L_i$ be the maximum capacity with $c^* \le c$, so that $\widetilde{K}_i(c) = \widetilde{K}_i(c^*)$. From the definition of $\widetilde{K}_i$, and from the fact that the pairs in $L'_i$ are of the form (8), we have

    $$\widetilde{K}_i(c^*) = \widetilde{K}_{i-1}(c^*) \oplus \widetilde{K}_{i-1}(c^* - w_i). \qquad (9)$$

    Since the capacities in $L_i$ are a subset of the capacities in $L'_i$, and since we have pruned the pairs in $L'_i$ by keeping the smallest capacity for every approximate number of solutions corresponding to that capacity, it holds that $\widetilde{K}_{i-1}(c^*) = \widetilde{K}_{i-1}(c)$. Moreover, observe that there is no capacity $c'$ in $L_{i-1}$ such that $c^* - w_i < c' \le c - w_i$. Indeed, assuming the contrary, $c' + w_i$ would be a capacity in $L'_i$, by (8). Since we have chosen $c^*$ as the largest capacity in $L_i$ at most $c$, and $c^* < c' + w_i \le c$ holds, this implies that $c' + w_i$ was pruned when passing from $L'_i$ to $L_i$; thus, the two pairs of $L'_i$ having $c^*$ and $c' + w_i$ as first components have equal second components. By (9) and the bimonotonicity of $L'_i$, this entails that also the two pairs of $L_{i-1}$ having $c^* - w_i$ and $c'$ as first components must have equal second components. This contradicts the fact that $L_{i-1}$ satisfies Property (i).

    Therefore, it also holds that $\widetilde{K}_{i-1}(c^* - w_i) = \widetilde{K}_{i-1}(c - w_i)$. Plugging these two relations into (9) we obtain

    $$\widetilde{K}_i(c) = \widetilde{K}_{i-1}(c) \oplus \widetilde{K}_{i-1}(c - w_i). \qquad (10)$$

    From (7), the fact that Property (ii) holds for $i-1$, and from (2), we get that $\widetilde{K}_i(c) \le K(i, c) \le (1+2^{1-r})^{2(i-1)+1}\,\widetilde{K}_i(c) \le (1+2^{1-r})^{2i}\,\widetilde{K}_i(c)$, which shows that Property (ii) holds also for $i$.

From Lemma 3, the length bound given by Property (i), and (6), we finally obtain Thm. 4. Since $n^3/\varepsilon \le n^2/\varepsilon^2$ as soon as $\varepsilon \le 1/n$, our deterministic FPTAS also runs faster, as soon as $1/\varepsilon = \Omega(n)$, than the Monte Carlo FPRASes in [5], which previously held the record on the whole range of parameters.


Onwards, we briefly sketch the details of applying this method to the Knapsack problem on a DAG (the full explanation is available in Appendix C). We can assume that all vertices of the DAG $G$ (with $n$ vertices and $m$ arcs) are reachable from $s$, and that all vertices reach $t$. For simplicity, we transform $G$ into an equivalent DAG $G'$ in which every vertex has at most two in-coming arcs; $G'$ has $O(m)$ vertices and $O(m)$ arcs, and its maximum path length (i.e., number of arcs in a path) is $O(n \log n)$. Say that $G'$ has $\bar{n}$ vertices and let $v_1, \dots, v_{\bar{n}}$ be a topological ordering of them. We now denote by $N_v(c)$ the number of paths from $s$ that end in $v$ and whose total weight is at most $c$. If, for every node $v$, its in-degree is $d(v)$, its in-neighborhood is $\{u_1, \dots, u_{d(v)}\}$, and the weights of the arcs entering $v$ are $w_1, \dots, w_{d(v)}$, respectively, then relation (7) generalizes to:

$$N_v(c) = \sum_{j=1}^{d(v)} N_{u_j}(c - w_j). \qquad (11)$$

The solution is obtained as $N_t(C)$. As before, we use (11) to count, keeping at each step approximate floating-point numbers. These numbers still have $O(\log n)$ bits for the exponent, but, since the maximum path length in $G'$ is $O(n \log n)$, the length of their mantissa will be $r = O(\log(n/\varepsilon))$ bits, now chosen so that $(1+2^{1-r})^{O(n \log n)} \le 1 + \varepsilon$. Additions and comparisons of these floating-point numbers still take the same time as before, namely $O(1)$.

As before, for every vertex $v$ of $G'$, we keep a list, $L_v$, of pairs [capacity, approximate number of solutions], now of length at most $O(2^r\,n \log n)$. Analogously, $L_s$ consists of the single pair $[0, 1]$, and while computing $L_v$ from the list $L_{u_1}$, or from the lists $L_{u_1}$ and $L_{u_2}$ (doable now in time $O(|L_{u_1}| + |L_{u_2}|)$), we maintain the following two invariants, where $\ell(v)$ denotes the length of the longest path from $s$ to $v$:

  • (i) $L_v$ is strictly increasing on both components;

  • (ii) $\widetilde{N}_v(c) \le N_v(c) \le \left(1+2^{1-r}\right)^{2\ell(v)}\,\widetilde{N}_v(c)$, for every $c \in \{0, \dots, C\}$.

From these considerations, Thm. 5 immediately follows.
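To make (11) concrete, here is an exact Python sketch of ours (counts grouped by attained weight instead of being approximated); the test instance encodes the 0/1 Knapsack reduction from Section 1.2 via parallel arc pairs:

```python
# Recurrence (11) computed exactly: vertices are processed in topological
# order; N[v] maps an attained weight to the number of s-v paths of that
# weight, so the answer aggregates all weights <= capacity.
from collections import defaultdict
from graphlib import TopologicalSorter

def count_paths_at_most(n_vertices, arcs, s, t, capacity):
    """arcs: list of (u, v, w); counts s-t paths of total weight <= capacity."""
    preds = defaultdict(list)
    deps = {v: set() for v in range(n_vertices)}
    for u, v, w in arcs:
        preds[v].append((u, w))
        deps[v].add(u)
    N = {v: defaultdict(int) for v in range(n_vertices)}
    N[s][0] = 1  # the empty path at the source s
    for v in TopologicalSorter(deps).static_order():
        for u, w in preds[v]:
            for c, cnt in N[u].items():
                if c + w <= capacity:
                    N[v][c + w] += cnt  # N_v(c) = sum_j N_{u_j}(c - w_j)
    return sum(N[t].values())

# parallel arc pairs encode the Knapsack instance w = (1, 2), C = 2
arcs = [(0, 1, 0), (0, 1, 1), (1, 2, 0), (1, 2, 2)]
assert count_paths_at_most(3, arcs, 0, 2, 2) == 3  # subsets {}, {1}, {2}
```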

Acknowledgements

This work was partially supported by the Academy of Finland under grant 250345 (CoECGR), and by the European Science Foundation, activity “Games for Design and Verification”. We thank Djamal Belazzougui and Daniel Valenzuela for discussions on data structures for large numbers, and Stephan Wagner for remarks on decomposable structures.

References

  • [1] S. A. Andersson, D. Madigan, and M. D. Perlman, A characterization of Markov equivalence classes for acyclic digraphs, Ann. Statist., (1997), pp. 502–541.
  • [2] E. A. Bender, L. B. Richmond, R. W. Robinson, and N. C. Wormald, The asymptotic number of acyclic digraphs, I, Combinatorica, 6 (1986), pp. 15–22.
  • [3] E. A. Bender and R. W. Robinson, The asymptotic number of acyclic digraphs, II, J. Comb. Theory, 44 (1988), pp. 363–369.
  • [4] A. Denise and P. Zimmermann, Uniform Random Generation of Decomposable Structures Using Floating-Point Arithmetic, Theor. Comput. Sci., 218 (1999), pp. 233–248.
  • [5] M. E. Dyer, Approximate counting by dynamic programming, in STOC, L. L. Larmore and M. X. Goemans, eds., ACM, 2003, pp. 693–699.
  • [6] M. E. Dyer, A. M. Frieze, R. Kannan, A. Kapoor, L. Perkovic, and U. V. Vazirani, A Mildly Exponential Time Algorithm for Approximating the Number of Solutions to a Multidimensional Knapsack Problem, Combinatorics, Probability & Computing, 2 (1993), pp. 271–284.
  • [7] P. Flajolet and R. Sedgewick, Analytic Combinatorics, Cambridge University Press, 2009.
  • [8] S. B. Gillispie and M. D. Perlman, Enumerating Markov Equivalence Classes of Acyclic Digraph Models, in Proc. of the Conf. on Uncertainty In Artificial Intelligence, Morgan Kaufmann, 2001, pp. 171–177.
  • [9] P. Gopalan, A. Klivans, and R. Meka, Polynomial-Time Approximation Schemes for Knapsack and Related Counting Problems using Branching Programs, CoRR, abs/1008.3187 (2010).
  • [10] P. Gopalan, A. Klivans, R. Meka, D. Stefankovic, S. Vempala, and E. Vigoda, An FPTAS for #Knapsack and Related Counting Problems, in FOCS, R. Ostrovsky, ed., IEEE, 2011, pp. 817–826.
  • [11] M. Jerrum, L. G. Valiant, and V. V. Vazirani, Random generation of combinatorial structures from a uniform distribution, Theor. Comput. Sci., 43 (1986), pp. 169–188.
  • [12] G. Melançon, I. Dutour, and M. Bousquet-Mélou, Random generation of directed acyclic graphs, Electronic Notes in Discrete Mathematics, 10 (2001), pp. 202–207.
  • [13] G. Melançon and F. Philippe, Generating connected acyclic digraphs uniformly at random, Inf. Process. Lett., 90 (2004), pp. 209–213.
  • [14] M. Mihalák, R. Šrámek, and P. Widmayer, Counting approximately-shortest paths in directed acyclic graphs, in 11th Workshop on Approximation and Online Algorithms – WAOA 2013, 2013. In press (a preliminary version is available at http://arxiv.org/abs/1304.6707v2).
  • [15] M. Milanič and A. I. Tomescu, Set graphs. I. Hereditarily finite sets and extensional acyclic orientations, Discrete Applied Mathematics, 161 (2013), pp. 677–690.
  • [16] B. Morris and A. Sinclair, Random walks on truncated cubes and sampling 0-1 knapsack solutions, SIAM J. Comput., 34 (2004), pp. 195–226.
  • [17] R. Peddicord, The number of full sets with $n$ elements, Proc. Amer. Math. Soc., 13 (1962), pp. 825–828.
  • [18] J. M. Peña, Approximate Counting of Graphical Models Via MCMC, Journal of Machine Learning Research - Proceedings Track, 2 (2007), pp. 355–362.
  • [19] A. Policriti and A. I. Tomescu, Counting extensional acyclic digraphs, Information Processing Letters, 111 (2011), pp. 787–791.
  • [20] R. Rizzi and A. I. Tomescu, Ranking, unranking and random generation of extensional acyclic digraphs, Inf. Process. Lett., 113 (2013), pp. 183–187.
  • [21] R. W. Robinson, Counting labeled acyclic digraphs, in New directions in the Theory of Graphs, F. Harary, ed., Academic Press, NY, 1973, pp. 239–273.
  • [22] A. Schönhage and V. Strassen, Schnelle multiplikation großer zahlen, Computing, 7 (1971), pp. 281–292.
  • [23] B. Steinsky, Efficient coding of labeled directed acyclic graphs, Soft Comput., 7 (2003), pp. 350–356.
  • [24] B. Steinsky, Enumeration of labelled chain graphs and labelled essential directed acyclic graphs, Discrete Mathematics, 270 (2003), pp. 266–277.
  • [25] B. Steinsky, Asymptotic behaviour of the number of labelled essential acyclic digraphs and labelled chain graphs, Graphs and Combinatorics, 20 (2004), pp. 399–411.
  • [26] D. Štefankovič, S. Vempala, and E. Vigoda, A deterministic polynomial-time approximation scheme for counting knapsack solutions, SIAM J. Comput., 41 (2012), pp. 356–366.
  • [27] S. Wagner, Asymptotic enumeration of extensional acyclic digraphs, in ANALCO, C. Martínez and H.-K. Hwang, eds., SIAM, 2012, pp. 1–8.
  • [28] S. Wagner, Asymptotic enumeration of extensional acyclic digraphs, Algorithmica, (2012), pp. 1–19. DOI: 10.1007/s00453-012-9725-4.

Appendix A Random generation of other DAG subclasses

A.1 Essential DAGs

Essential DAGs are used to represent the structure of Bayesian networks [8, 18]. They were counted in [24] by inclusion-exclusion, and their asymptotic behavior was studied in [25]. We give a new counting recurrence for essDAGs, which leads to the first algorithm for generating u.a.r. a labeled essDAG with $n$ vertices; this is useful for learning the structure of a Bayesian network from data [8, 18]. This can be turned into an FPTAS, with the same complexity and approximation bounds as in the case of DAGs.

Essential DAGs (essDAGs) are those DAGs with the property that, for every arc $(u, v)$, the set of in-neighbors of $u$ is different from the set of in-neighbors of $v$, minus vertex $u$; that is, for every arc $(u, v)$ it holds that $N^-(u) \ne N^-(v) \setminus \{u\}$.

Define the depth of a vertex $v$ in a DAG $D$ as the length of a longest directed path from a source of $D$ to $v$. Note that a vertex of maximum depth in $D$ must be a sink of $D$ (but the converse does not hold). Let us denote by $e_{n,k}$ the number of labeled essDAGs with $n$ vertices, and in which there are $k$ vertices of maximum depth.

Lemma 4

For any $n \ge 1$ and any $1 \le k \le n$