Compact Representation of Uncertainty in Hierarchical Clustering
Abstract
Hierarchical clustering is a fundamental task often used to discover meaningful structure in data, such as phylogenetic trees, taxonomies of concepts, subtypes of cancer, and cascades of particle decays in particle physics. When multiple hierarchical clusterings of the data are possible, it is useful to represent uncertainty in the clustering through various probabilistic quantities. Existing approaches represent uncertainty for a range of models; however, they only provide approximate inference. This paper presents dynamic-programming algorithms and proofs for exact inference in hierarchical clustering. We are able to compute the partition function, the MAP hierarchical clustering, and marginal probabilities of sub-hierarchies and clusters. Our method supports a wide range of hierarchical models and only requires a cluster compatibility function. Rather than scaling with the number of hierarchical clusterings of N elements, (2N−3)!!, our approach runs in time and space proportional to the significantly smaller powerset of N. Despite still being exponential, these algorithms enable exact inference in small-data applications and are also interesting from a theoretical perspective. We demonstrate the utility of our method and compare its performance with respect to existing approximate methods.
Keywords: Hierarchical Clustering – Jet Physics – Genomics
1 Introduction
The traditional (flat) clustering task is to output a partition of a given dataset. In contrast, the output required in the hierarchical clustering task is a recursive partitioning of the data, i.e., a rooted tree structure where each node in the tree corresponds to a subset of the data, with the data points at the leaves and the non-leaf nodes corresponding to the data at their descendant leaves. The two primary advantages of a recursive, tree-shaped partition are that trees, unlike flat partitions, 1) encode structure between the data partitions at multiple levels of granularity, and 2) represent multiple partitions of the dataset at once Dasgupta (2016).
Hierarchical clustering is often used to discover meaningful structures, such as phylogenetic trees of organisms Kraskov et al. (2005), taxonomies of concepts Cimiano and Staab (2005), subtypes of cancer Sørlie et al. (2001), and jets in particle physics Cacciari et al. (2008). Among the reasons that hierarchical clustering has been found to be broadly useful is that it forms a natural representation of data generated by a Markov tree, i.e., a tree-shaped model where the state variables depend only on their parent or children.
Often, probabilistic approaches, such as diffusion trees Neal (2003); Knowles and Ghahramani (2011) and coalescent models Teh et al. (2008); Boyles and Welling (2012); Hu et al. (2013), model which tree structures are likely for a given dataset. Inference in these methods is approximate, typically performed with methods including greedy best-first search, beam search, sequential Monte Carlo, and MCMC. While these approaches often discover tree structures using probabilistic inference, the corresponding inference methods do not have efficient ways to compute an exact normalized distribution over all tree structures.
Recent work Kappes et al. (2015); Kohonen and Corander (2016) has studied distributions over flat clusterings, showing that dynamic programming can be used both to efficiently compute the partition function and to find the MAP clustering Van Os and Meulman (2004); Greenberg et al. (2018). However, analogous results for hierarchical clustering are not known.
In this paper, we provide data structures and algorithms for efficiently computing exact distributions over hierarchical clusterings. We show how these methods can be used not only to produce a normalizing constant for the distribution but also to find the MAP clustering, as well as compute marginal distributions, including the probability of any sub-hierarchy or cluster. We further show how to sample exactly from the posterior distribution over hierarchical clusterings (i.e., the probability of sampling a given hierarchy is equal to the probability of that hierarchy). Our methods require time and space subquadratic in the powerset of N, a substantial reduction over brute-force methods, which require Ω((2N−3)!!), making exact inference practical for datasets on the order of 20 points or fewer and thereby benefiting real-world use cases where exact inference on datasets of this size is of high interest and practical importance. For example, in jet physics there are practical applications for datasets of this size when implementing simplified models for the cascades of particle decays Soper and Spannowsky (2011, 2013) or on experimental data (after preprocessing methods) from proton-proton collisions at the Large Hadron Collider at CERN. We demonstrate our methods' capabilities for exact inference in discovering cascades of particle decays in jet physics and subtype hierarchies in cancer genomics, two applications where there is a need for exact inference on datasets of the size made feasible by our methods. We find that greedy and beam search methods frequently return estimates with energy well below that of the MAP clustering, and we quantify exactly various probabilistic quantities of interest in these domains, which only our methods make possible.
2 Background: Uncertainty in Clustering
Hierarchical Clustering is the task of recursively splitting a dataset into subsets until reaching singletons (this can equivalently be viewed as starting with the set of singletons and repeatedly taking the union of sets until reaching the entire dataset). This is in contrast to flat clustering, where the task is to partition the dataset into disjoint subsets. Formally,
Definition (Hierarchical Clustering). A hierarchical clustering of a dataset X is a set of nested subsets (clusters) of X that contains X itself and every singleton {x} for x ∈ X, such that every pair of clusters is either disjoint or nested (one contains the other).
We wish to be able to efficiently compute probability distributions over hierarchical clusterings. Toward that end, we adopt an energy-based model of hierarchical clustering, where each hierarchy is assigned a real-valued energy. (Hierarchies are often represented by tree structures; we use the terms hierarchy and tree interchangeably throughout this paper.) The specifics of the energy function are model dependent, and we choose a convention where a higher energy implies a more compatible hierarchy, e.g., the energy could be identified with a likelihood or with a correlation function for the pairing of vertices. However, we want to emphasize that the algorithms introduced in this paper (see Section 3 for more details) are independent of the particular form of the energy. Specifically, these algorithms can be straightforwardly implemented on any model where the energy of each hierarchy factorizes as a product of positive per-split (single-vertex) values.
In order to assign an energy to a hierarchy, a linkage function mapping pairs of clusters onto the positive reals is required. In order to obtain a distribution over the set of hierarchical clusterings, we must also compute a normalizer, often called the partition function, Z. To compute Z, we require the linkage function ψ(·, ·) to be positive.
The energy of tree H, φ(H), is then given by φ(H) = ∏_{(X_L, X_R) ∈ splits(H)} ψ(X_L, X_R), where splits(H) is the set of sibling cluster pairs produced at the internal nodes of H. Since the energy factorizes over the splits, the energy of a hierarchy can be computed by traversing the hierarchy bottom-up or top-down, without having to consider all possible pairs of clusters in H.
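As a concrete illustration, the product-over-splits energy can be computed with a single bottom-up traversal. This is a minimal sketch, not the paper's released code: the nested-tuple tree representation and the function name are ours, and `psi` stands in for whichever linkage function the model supplies.

```python
def tree_energy(node, psi):
    """Energy of a hierarchy: the product of linkage values psi(X_L, X_R)
    over every internal split, computed bottom-up.
    `node` is either a leaf label or a pair (left_subtree, right_subtree);
    `psi` maps two frozensets (sibling clusters) to a positive real.
    Returns (cluster at this node, energy of the subtree)."""
    if not isinstance(node, tuple):
        return frozenset([node]), 1.0      # a leaf contributes energy 1
    left, right = node
    l_set, l_energy = tree_energy(left, psi)
    r_set, r_energy = tree_energy(right, psi)
    # one psi factor per internal node, times the children's energies
    return l_set | r_set, psi(l_set, r_set) * l_energy * r_energy
```

With a constant linkage ψ ≡ c, a tree with m internal nodes has energy c^m, which makes the factorization easy to sanity-check.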
Unfortunately, the number of trees over N leaves grows very rapidly, namely (2N−3)!! (see Callan (2009); Dale and Moon (1993) for more details and a proof). In fact, even the lower bound of N!/2 obtained by counting only caterpillar clusterings (see Appendix A.3) already rules out brute-force enumeration for all but the smallest datasets.
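The count (2N−3)!! can be verified against the split recurrence used throughout the paper: fix one element a; the child cluster containing a has size k and can be chosen in C(N−1, k−1) ways. The snippet below is our own illustrative check, not part of the paper's method.

```python
from math import comb

def num_hierarchies(n):
    """Count binary hierarchical clusterings of n labeled elements.
    Fixing one element a, the child cluster containing a has size k,
    chosen in C(n-1, k-1) ways; each side is then clustered recursively."""
    if n <= 1:
        return 1
    return sum(comb(n - 1, k - 1) * num_hierarchies(k) * num_hierarchies(n - k)
               for k in range(1, n))

def double_factorial(m):
    """m!! = m * (m-2) * (m-4) * ... down to 1 (for odd m)."""
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out
```

For example, 4 leaves give 15 hierarchies and 5 leaves give 105, matching (2N−3)!!.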
In Greenberg et al. (2018), the authors use the cluster trellis data structure to compute distributions over flat clusterings. In the following section, we describe how to extend this data structure and use it to compute distributions over hierarchical clusterings.
3 Cluster Trellis for Hierarchical Clustering
In order to express a distribution over hierarchical clusterings, we wish to compute the partition function, i.e., the sum of the likelihoods of all possible hierarchical clusterings. We also wish to find the MAP hierarchical clustering, as well as the marginal probabilities of given subhierarchies and clusters.
In this section we describe algorithms for computing the partition function, the maximal-energy hierarchical clustering, and marginal probabilities, all exactly, in O(3^N) time. The algorithms are based on a cluster data structure, described in Greenberg et al. (2018), that we extend to hierarchical clusterings. We give correctness proofs and analyze the complexity of these algorithms below as well. In Figure 1 we show a visualization of an example implementing the hierarchical trellis to compute these probabilistic quantities over a dataset of four elements. By comparison, brute-force methods have to iterate through all (2N−3)!! clusterings.
Computing the Partition Function
Given a dataset of N elements, X, the partition function Z(X) for the set of hierarchical clusterings over X, denoted H(X), is given by Z(X) = Σ_{H ∈ H(X)} φ(H).
This value can be expressed in terms of the partition functions for hierarchical clusterings over proper subsets of X. In particular:

Fact. For any a ∈ X, the hierarchical partition function can be written recursively as

Z(X) = Σ_{X_L ∈ C_a(X)} ψ(X_L, X_R) Z(X_L) Z(X_R),   (1)

where X_R = X \ X_L and C_a(X) is the set of all clusters containing the element a (omitting X itself), i.e., C_a(X) = {X′ : X′ ⊂ X, a ∈ X′}.
See Appendix A.1 for the proof. To compute the partition function efficiently, we make use of the recurrence in Eq. (1) and our extension of the cluster trellis introduced in Greenberg et al. (2018). Our algorithm works by first setting the partition function of the leaf nodes in the trellis to 1. Then, the partition function for every node in the trellis is computed in order (in a bottom-up approach), from the nodes with the smallest number of elements to the nodes with the largest number of elements, memoizing the partition function value at each node. By computing the partition functions in this order, whenever computing the partition function of a given node in the trellis, the partition functions of all of the descendant nodes will have already been computed and memoized. Note that this algorithm can also be expressed recursively. See Algorithm 1 for pseudocode of the recursive version of this algorithm and Figure 1 for a visualization of computing distributions over hierarchical clusterings using the trellis.
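The recursive version of the computation can be sketched in a few lines of Python. This is our own minimal sketch of the idea behind Algorithm 1, not the paper's released implementation: memoization over frozensets stands in for the trellis's per-node storage, and `psi` is an assumed user-supplied linkage function.

```python
from itertools import combinations
from functools import lru_cache

def partition_function(elements, psi):
    """Exact hierarchical-clustering partition function Z(X), computed by
    the recurrence Z(X) = sum over splits of psi(X_L, X_R) Z(X_L) Z(X_R),
    where the left child X_L always contains a fixed element a.
    `psi` maps two frozensets to a positive real."""
    @lru_cache(maxsize=None)
    def Z(X):
        if len(X) == 1:
            return 1.0                       # singletons have Z = 1
        a = min(X)                           # fixed element: every X_L contains a
        rest = [e for e in X if e != a]
        total = 0.0
        # enumerate proper subsets X_L of X with a in X_L
        for r in range(len(rest)):           # r < len(rest) keeps X_R nonempty
            for extra in combinations(rest, r):
                XL = frozenset((a,) + extra)
                XR = X - XL
                total += psi(XL, XR) * Z(XL) * Z(XR)
        return total
    return Z(frozenset(elements))
```

With ψ ≡ 1 every hierarchy has energy 1, so Z(X) simply counts hierarchies, recovering (2N−3)!!.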
Since the partition function for each node in the trellis is computed once, and the descendant nodes' partition functions are always precomputed due to the order of computation, the time complexity of the algorithm is O(3^N), which is significantly smaller than the Ω((2N−3)!!) of brute force. See Appendix A.5 for the proof.
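The O(3^N) bound follows from a counting identity that is easy to verify numerically: each of the C(N, k) trellis nodes of size k enumerates its 2^(k−1) bipartitions, and the binomial theorem gives Σ_k C(N, k) 2^k = 3^N. The helper below is our own illustration of this accounting, not code from the paper.

```python
from math import comb

def trellis_work(n):
    """Total subset-enumeration work over the trellis: each of the
    C(n, k) nodes of size k sums over the 2^(k-1) bipartitions of its
    cluster (the left child is pinned to contain a fixed element)."""
    return sum(comb(n, k) * 2 ** (k - 1) for k in range(1, n + 1))
```

By the binomial theorem this equals (3^n − 1)/2, i.e., Θ(3^n), far below the (2n−3)!! hierarchies a brute-force scan would visit.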
Computing the MAP Hierarchical Clustering
It is possible to compute the MAP hierarchical clustering for a dataset X, H*(X), defined as argmax_{H ∈ H(X)} φ(H), with an approach analogous to the one introduced for the partition function, using the cluster trellis and a recurrence analogous to Eq. (1).
For any a ∈ X, the MAP clustering can be written recursively as φ(H*(X)) = max_{X_L ∈ C_a(X)} ψ(X_L, X_R) φ(H*(X_L)) φ(H*(X_R)), with X_R = X \ X_L.
See Appendix A.4 for the proof. As in the partition function algorithm described above, the time complexity for finding the MAP clustering is O(3^N). Unlike the partition function algorithm, to compute the maximal-likelihood hierarchical clustering, the maximal energy of the sub-hierarchy rooted at each node is computed, rather than the partition function. Further, pointers to the children of the maximal sub-hierarchy rooted at each node are stored at that node.
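The MAP recursion is the partition-function recursion with max in place of sum, plus backpointers to the best split. The sketch below is ours (names and the nested-tuple tree output are assumptions, not the paper's API); memoization again plays the role of the trellis.

```python
from itertools import combinations
from functools import lru_cache

def map_hierarchy(elements, psi):
    """MAP hierarchical clustering: same recursion as the partition
    function with max in place of sum; the best split is kept as a
    backpointer.  Returns (maximal energy, nested-tuple tree)."""
    @lru_cache(maxsize=None)
    def best(X):
        if len(X) == 1:
            return 1.0, next(iter(X))
        a = min(X)                            # fixed element pins the left child
        rest = [e for e in X if e != a]
        best_val, best_tree = -1.0, None
        for r in range(len(rest)):
            for extra in combinations(rest, r):
                XL = frozenset((a,) + extra)
                XR = X - XL
                vl, tl = best(XL)
                vr, tr = best(XR)
                v = psi(XL, XR) * vl * vr
                if v > best_val:              # backpointer to the best split
                    best_val, best_tree = v, (tl, tr)
        return best_val, best_tree
    return best(frozenset(elements))
```

For instance, a toy linkage ψ(X_L, X_R) = |X_L||X_R| rewards unbalanced "caterpillar" splits: on four elements the best product is 3·2·1 = 6, versus 4 for the balanced tree.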
Marginals
It is often useful to compute the marginal probability of a given sub-hierarchy H′, defined as the total probability of all hierarchies containing H′, as well as the marginal probability of a given cluster X′, defined as the total probability of all hierarchies containing X′ as a cluster.
Sampling from the Distribution over Trees
In this section we introduce a sampling procedure for hierarchical clusterings, implemented over the trellis, that samples according to the probability distribution over all hierarchical clusterings H(X). Thus, the probability of sampling a hierarchical clustering H is given by

p(H) = φ(H) / Z(X).   (2)
Given a dataset X, after building the trellis and computing the partition function Z(X′) for the sub-hierarchies rooted at each node X′ of the trellis, we sample starting from the root X. In particular,
Proposition:
A sample can be drawn recursively, starting from the root vertex X of the trellis and proceeding until reaching the singletons. At each level (and branch), the children {X_L, X_R} of a vertex X′ are sampled with probability

p(X_L, X_R | X′) = ψ(X_L, X_R) Z(X_L) Z(X_R) / Z(X′),   (3)

with X_R = X′ \ X_L. The probability of the sample is then obtained by multiplying the values drawn at each level. See Appendix A.2 for the proof and Algorithm 2 for pseudocode.
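The top-down procedure can be sketched as follows. This is a minimal illustration under our own naming conventions, not the paper's Algorithm 2 verbatim; it reuses the memoized partition function and draws each split with the probability in Eq. (3).

```python
import random
from itertools import combinations
from functools import lru_cache

def sample_hierarchy(elements, psi, rng=random):
    """Draw a hierarchy exactly from p(H) = phi(H) / Z(X) by sampling
    splits top-down with probability psi(X_L, X_R) Z(X_L) Z(X_R) / Z(X')."""
    @lru_cache(maxsize=None)
    def Z(X):
        if len(X) == 1:
            return 1.0
        a = min(X)
        rest = [e for e in X if e != a]
        return sum(psi(frozenset((a,) + c), X - frozenset((a,) + c))
                   * Z(frozenset((a,) + c)) * Z(X - frozenset((a,) + c))
                   for r in range(len(rest)) for c in combinations(rest, r))

    def draw(X):
        if len(X) == 1:
            return next(iter(X))
        a = min(X)
        rest = [e for e in X if e != a]
        splits, weights = [], []
        for r in range(len(rest)):
            for c in combinations(rest, r):
                XL = frozenset((a,) + c)
                XR = X - XL
                splits.append((XL, XR))
                weights.append(psi(XL, XR) * Z(XL) * Z(XR))
        XL, XR = rng.choices(splits, weights=weights)[0]  # Eq. (3)
        return (draw(XL), draw(XR))

    return draw(frozenset(elements))
```

The normalizers telescope level by level, so the product of the per-split probabilities equals φ(H)/Z(X), i.e., the sampler is exact rather than approximate.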
4 Experiments
In this section, we conduct experiments demonstrating our approach to computing exactly various probabilistic quantities of importance for real-world applications. In the case of the MAP hierarchical clustering, we compare the performance of our trellis algorithm with respect to approximate methods, e.g., greedy and beam search algorithms, and analyze the improvements. In particular, we study these probabilistic quantities for two types of problems: hierarchical clusterings where the ground truth is provided by a toy model that simulates cascades of particle decays in jet physics, and modeling subtypes of cancer, which determine prognosis and treatment plans. Also, for the jet physics case, we provide an implementation of the sampling procedure for the posterior distribution, detailed in Section 3, and compare the sampled distribution with the expected one calculated using the partition function.
4.1 Jet Physics and Hierarchical Clusterings
The Large Hadron Collider (LHC) at CERN collides two beams of highenergy protons and produces many new (unstable) particles. Some of these new particles (quarks and gluons) will undergo a showering process, where they radiate many other quarks and gluons in successive binary splittings. These splittings can be represented with a binary tree, where the energy of the particles decreases after each step. When the energy is below a given threshold, the showering terminates, resulting in a spray of particles that is called a jet. The particle detectors only observe the leaves of this binary tree (the jet constituents), and the unstable particles in the showering process are unobserved. While the latent showering process is unobserved, it is described by quantum chromodynamics (QCD).
A specific jet could result from several latent trees generated by this showering process. It is natural to represent a jet and the particular clustering history that gave rise to it as a binary tree, where the inner nodes represent each of the unstable particles and the leaves represent the jet constituents. This representation connects jet physics with natural language processing (NLP) and biology, an exciting connection first suggested in Louppe et al. (2019).
Background
Jets are among the most common objects produced at the LHC, and a great amount of work has been done to develop techniques for a better treatment and understanding of them, from both an experimental and a theoretical point of view. In particular, determining the nature (type) of the initial unstable particle (the root of the binary tree), and of its children and grandchildren, that gave rise to a specific jet is essential in searches for new physics, as well as in precision measurements of our current model of nature, i.e., the Standard Model of particle physics. In this context, it becomes relevant and interesting to study algorithms to cluster the jet constituents (leaves) into a binary tree and metrics to compare them. Being able to improve over the current techniques that attempt to invert the showering process to reconstruct the ground-truth-level tree would assist in physics searches at the Large Hadron Collider.
There are software tools called parton showers, e.g., PYTHIA, Herwig, Sherpa, that encode a physics model for the simulation of jets produced at the LHC. Current algorithms used by the physics community to estimate the clustering history of a jet are domain-specific sequential recombination jet algorithms, called generalized-kt clustering algorithms Cacciari et al. (2008), and they do not use these generative models. These algorithms sequentially cluster the jet constituents by locally choosing the pairing of nodes that minimizes a distance measure. Given a pair of nodes, this measure depends on the angular distance between their momentum vectors and on their momenta in the transverse direction with respect to the collision axis of the incoming proton beams.
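For concreteness, the standard generalized-kt pairwise distance from Cacciari et al. (2008) can be written as below. This is the generic LHC formula in the (pt, y, φ) parameterization, sketched by us for illustration; it is not the Ginkgo-specific measure used later in this section.

```python
import math

def gen_kt_distance(pt_i, y_i, phi_i, pt_j, y_j, phi_j, p=-1, R=1.0):
    """Generalized-kt distance between two particles:
        d_ij = min(pt_i^(2p), pt_j^(2p)) * DeltaR_ij^2 / R^2,
    with DeltaR^2 = (y_i - y_j)^2 + (phi_i - phi_j)^2.
    p = 1 gives the kt algorithm, p = 0 Cambridge/Aachen, p = -1 anti-kt;
    pt is transverse momentum, y rapidity, phi azimuthal angle."""
    # wrap the azimuthal difference into (-pi, pi]
    dphi = math.atan2(math.sin(phi_i - phi_j), math.cos(phi_i - phi_j))
    dr2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** (2 * p), pt_j ** (2 * p)) * dr2 / R ** 2
```

Sequential recombination then repeatedly merges the pair with the smallest d_ij, which is exactly the kind of local, greedy decision the trellis approach avoids.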
Currently, generative models that implement the parton shower in full physics simulations are implicit models, i.e., they do not admit a tractable density. Extracting additional information that describes the features of the latent process is relevant to problems where we aim to unify generation and inference, e.g., inverting the generative model to estimate the clustering history of a jet. A schematic representation of this approach is shown in Figure 2.
At present, it is very hard to access the joint likelihood in state-of-the-art parton shower generators in full physics simulations. Also, typical implementations of parton showers involve sampling procedures that destroy the analytic control of the joint likelihood. Thus, to aid machine learning (ML) research for jet physics, a Python package for a toy generative model of a parton shower, called Ginkgo, was introduced in Cranmer et al. (2019a). Ginkgo has a tractable joint likelihood, and is as simple and easy to describe as possible while still capturing the essential ingredients of parton shower generators in full physics simulations. Within the analogy between jets and NLP, Ginkgo can be thought of as producing ground-truth parse trees with a known language model. A Python package with a Pyro implementation of the model and few software dependencies is publicly available in Cranmer et al. (2019a).
Data and Methods
In this paper, we propose a new method to efficiently consider every possible latent structure, find the MAP hierarchical clustering and the partition function Z, and compute an estimate of the posterior distribution over all possible hierarchical clusterings from sampling. The generative model described by Ginkgo deviates in a subtle way from the assumptions needed for the trellis procedure to find the exact maximum-likelihood tree and partition function over trees. This relates to the requirement that the linkage function depend only on pairs of clusters. In particular, the likelihood function for Ginkgo can assign different likelihoods to different subtrees over the same cluster. Nevertheless, we find that the trellis procedure is very effective when we identify the energy function for a pair of clusters with the likelihood of the tree resulting from merging the two maximum-likelihood trees of the respective clusters. The trellis procedure is exact with respect to hierarchical clusterings under this linkage function, but not exact for a more general type of linkage function defined over pairs of trees. We compare the trellis results for the MAP hierarchical clustering with approximate methods, as described below.
The ground-truth hierarchical clusterings of our dataset are generated with the toy generative model for jets, Ginkgo. This model implements a recursive algorithm to generate a binary tree whose leaves are the jet constituents; see Cranmer et al. (2019b) for more details. Jet constituents (leaves) and intermediate-state particles (inner nodes) in Ginkgo are represented by a two-dimensional spatial momentum vector, which determines the direction of motion. It is important to mention that in this model the linkage function depends on pairs of trees, i.e., the energy for a pairing of clusters depends on the subtree structures rooted at each of those clusters. As a result, we are not guaranteed that the MAP computed with the trellis is exact in this particular model. For the same reason, the partition function we obtain is not identical to that of the ground-truth distribution, but it is still a well-defined partition function under the pairwise linkage function described above. Thus, we obtain a partition function and sample from the posterior distribution to demonstrate an implementation of the trellis algorithm.
Next, we review new algorithms to cluster jets, based on the joint likelihood of the jet binary tree in Ginkgo, introduced in Cranmer et al. (2020). In that work the authors explore algorithms that aim to obtain the maximum-likelihood estimate (MLE) or MAP for the latent structure of a jet, given a set of jet constituents (leaves). In this approach, the latent tree structure is fixed by the algorithm. In particular, greedy and beam search algorithms are studied. The greedy one simply chooses the pairing of nodes that locally maximizes the likelihood at each step. The beam search one maximizes the likelihood over multiple steps before choosing the latent path; the current implementation only looks one more step ahead, with a beam size determined by the number of jet constituents to cluster. Also, when two or more clusterings have an identical likelihood value, only one of them is kept in the beam, to avoid counting the different orderings of the same clustering multiple times (see Boyles and Welling (2012) for details about the different orderings of the internal nodes of the tree). This approach significantly improved the performance of the beam search algorithm in terms of finding the MLE.
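The greedy baseline can be sketched generically as below. This is our own schematic version (the names and the nested-tuple output are assumptions, and `psi` stands in for the Ginkgo pairwise likelihood), intended only to make the local, one-merge-at-a-time character of the baseline concrete; beam search differs by keeping the top-b partial clusterings instead of only the best one.

```python
from itertools import combinations

def greedy_cluster(leaves, psi):
    """Greedy agglomeration: repeatedly merge the pair of current roots
    whose pairing maximizes the linkage value psi(cluster_i, cluster_j).
    Each forest entry is (cluster as frozenset, nested-tuple subtree)."""
    forest = [(frozenset([x]), x) for x in leaves]
    while len(forest) > 1:
        # pick the locally best pair of roots (no lookahead)
        i, j = max(combinations(range(len(forest)), 2),
                   key=lambda ij: psi(forest[ij[0]][0], forest[ij[1]][0]))
        (si, ti), (sj, tj) = forest[i], forest[j]
        forest = [f for k, f in enumerate(forest) if k not in (i, j)]
        forest.append((si | sj, (ti, tj)))
    return forest[0][1]
```

Because each merge is irrevocable, a single locally optimal but globally poor pairing can lock the algorithm out of the MAP tree, which is exactly the failure mode quantified against the trellis below.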
Results
In this section we show results for the implementation of the trellis algorithm on a jet physics dataset of 1965 Ginkgo Cranmer et al. (2019a) jets with between 5 and 10 leaves, which we refer to as Ginkgo510.
Table 1: Difference in MAP values between the row and column methods (mean ± std. dev.) on the Ginkgo510 dataset.

               Beam Search    Greedy
Trellis        2.7 ± 1.9      5.9 ± 4.3
Beam Search    —              3.2 ± 3.4
Even though we use beam search as a baseline to estimate the MAP value, we want to emphasize that it typically performs well for trees with up to about 10 leaves; yet, as we see in Table 1, the trellis MAP value is significantly greater. As already mentioned, the MAP computed with the trellis is not exact in this particular model, because the linkage function depends on the subtree structures rooted at each of the clusters. However, the trellis allows us to obtain a MAP value computed by recursively considering each subtree MAP value, which covers a much larger set of hierarchies than beam search (the baseline algorithm used here only considers one step ahead).
Next, in Figure 3 we show a plot of the partition function Z versus the MAP value for each set of leaves in the Ginkgo510 dataset. It is interesting to note that there seems to be a correlation between Z and the trellis MAP value, dependent on the number of leaves. We want to emphasize that it is the trellis algorithm that gives us access to the partition function.
Finally, we show the implementation of the sampling procedure introduced in section 3. We compare the sampled posterior distribution with respect to the expected one (as explained below), conditioned on a set of five leaves.
We show in Figure 4 the results from repeatedly sampling hierarchies (black dots) together with the expected distribution (green) for the likelihood of each hierarchy. The expected posterior is defined as the probability density function over the possible hierarchies. In principle, this could be obtained by taking the ratio of the likelihood of each hierarchy to the partition function Z. However, an algorithm that accesses the likelihood of every possible hierarchy would be inefficient and infeasible in practice (the trellis algorithm provides an efficient way to obtain Z without having to visit each hierarchy). Thus, we opt for an approximate approach, as follows. If we sample a sufficient number of times, we expect each possible hierarchy to appear at least once. Thus, as a proof of concept, we draw samples for a set of five leaves (88 different hierarchies), keep only one sample for each unique likelihood value, and normalize by Z and the bin size. We show this result in the histogram labeled Expected (green) in Figure 4. There is excellent agreement between the sampled and expected distributions.
4.2 Cancer Genomics and Hierarchical Clusterings
Background
Hierarchical clustering is a common clustering approach for gene expression data (Sørlie et al., 2001). However, standard hierarchical clustering uses a greedy agglomerative or divisive heuristic to build a tree. It is not uncommon to need to cluster a small number of samples in cancer genomics studies. An analysis of data available from https://clinicaltrials.gov shows that the median sample size for 7,412 completed phase I clinical trials involving cancer is only 30.
Data and Methods
Here, we compare a greedy agglomerative clustering to our exact MAP clustering tree using the Prediction Analysis of Microarray 50 (pam50) gene expression data set. The pam50 data set (50 genes, 232 samples) is available from the UNC MicroArray Database (University of North Carolina, 2020). It has intrinsic subtype annotations for 139 of the 232 samples. Missing data values (2.65%) were filled in with zeros. We drew a stratified sample of the total data set with two samples from each known intrinsic subtype and two samples from the unknown group. See the Appendix for more details.
Results
Figure 5 displays the greedy hierarchical clustering tree and the MAP tree with transformed weights (see Appendix A.6 for more details) for the twelve samples selected from the pam50 dataset. The main difference between these trees is in the split of the subtree including the LumB, HER2, and unknown samples. The greedy method splits HER2 from LumB and unknown, while the MAP tree shows a different topology for this subtree. For the MAP solution, we note that the corresponding subtree is consistent: all of the correlation coefficients within this cluster are positive, so the optimal action is to split off the item with the smallest (positive) correlation coefficient.
5 Related Work
Modeling distributions over tree structures has been the subject of a large body of work. Bayesian nonparametric models typically define a posterior distribution over tree structures given data, such as Dirichlet and Pitman-Yor diffusion trees Neal (2003); Knowles and Ghahramani (2011), coalescent models Teh et al. (2008); Boyles and Welling (2012); Hu et al. (2013), and, in the case of grouped data, the nested Chinese restaurant process Blei et al. (2010) and nested hierarchical Dirichlet processes Paisley et al. (2014). Other models, such as tree-structured nested stick-breaking, provide a distribution over a different class of tree structures, one for which data can sit at internal nodes Ghahramani et al. (2010). These methods, while providing a distribution over trees, only support parametric distributions to define emission probabilities and do not support the general family of linkage functions used in our approach, which can use any scoring function to define the distribution. Factor-graph-based distributions over tree structures, such as Wick et al. (2012), on the other hand support a flexible class of distributions over tree structures, as in our approach. However, inference in factor graph models, as well as in many of the Bayesian nonparametric models, is typically approximate or performed by sampling methods. In practice this leads to approximate MAP solutions and distributions over tree structures. Exact methods like the one proposed in this paper have not, to our knowledge, been proposed.
Dasgupta (2016) defines a cost function for hierarchical clustering. Much work has been done to develop approximate solution methods and related objectives Roy and Pokutta (2017); CohenAddad et al. (2019); Charikar and Chatziafratis (2017); Moseley and Wang (2017); CohenAddad et al. (2017); Charikar et al. (2019).
Bootstrapping methods, such as Suzuki and Shimodaira (2006), represent uncertainty in hierarchical clustering. Unlike our approach, bootstrapping methods approximate statistics of interest through repeatedly (re)sampling from the empirical distribution.
Exact inference and exact distributions over flat clusterings have been considered by Greenberg et al. (2018), which provides the foundation of the dynamic programming approach in this work. Other work on exact inference in flat clustering uses fast convolutions via the Möbius transform and Möbius inversion Kohonen and Corander (2016). Kappes et al. (2015) produce approximate distributions over flat clusterings using Perturb-and-MAP Papandreou and Yuille (2011).
6 Conclusion
This paper describes a data structure and dynamic-programming algorithm to efficiently compute and sample from probability distributions over hierarchical clusterings. Our method improves upon the computational cost of brute-force methods, from Ω((2N−3)!!) to time subquadratic in the substantially smaller powerset of N. We demonstrate our methods' utility on jet physics and cancer genomics datasets and show its improvement over approximate methods. Finally, though there are multiple approaches to sampling from the posterior distribution, such as MCMC techniques, they can be very expensive, while our sampling method has bounded computation time.
Acknowledgements
KC and SM are supported by the National Science Foundation under the awards ACI1450310 and OAC1836650 and by the MooreSloan data science environment at NYU. PF is supported in part by NSF HDR TRIPODS award 1934846.
Appendix A Appendix
a.1 Proof of Fact 3
Proof.
Given a dataset X, pick an element a ∈ X. We consider all possible clusters X_L that contain a. Given X_L, then X_R is fixed so as to satisfy X_L ∪ X_R = X and X_L ∩ X_R = ∅. We want to show that the partition function Z(X) can be written recursively in terms of Z(X_L) and Z(X_R).
The partition function is defined as the sum of the energies of all possible hierarchical clusterings H(X):

Z(X) = Σ_{H ∈ H(X)} φ(H) = Σ_{H ∈ H(X)} ψ(X_L, X_R) φ(H_L) φ(H_R),   (4)

where H_L and H_R are the sub-hierarchies of H that are rooted at X_L and X_R, respectively.

Next, we rewrite Eq. (4) by grouping together all the hierarchies that share the same pair of clusters {X_L, X_R}:

Z(X) = Σ_{X_L ∈ C_a(X)} ψ(X_L, X_R) ( Σ_{H_L ∈ H(X_L)} φ(H_L) ) ( Σ_{H_R ∈ H(X_R)} φ(H_R) ) = Σ_{X_L ∈ C_a(X)} ψ(X_L, X_R) Z(X_L) Z(X_R),   (5)

with X_R = X \ X_L. Thus, Z(X) of a cluster X can be written recursively in terms of the partition functions of the subclusters of X.
∎
a.2 Proof of Sampling Proposition
Proof.
At each level (and branch) we sample a pair of clusters {X_L, X_R} splitting the current vertex X′, where X_L ∪ X_R = X′ and X_L ∩ X_R = ∅, with probability

p(X_L, X_R | X′) = ψ(X_L, X_R) Z(X_L) Z(X_R) / Z(X′).   (6)

The probability of the sampled tree is obtained by multiplying the values obtained at each level. Let us consider the contribution from two consecutive levels, for a sub-hierarchy rooted at X′ whose left child X_L is split in turn into {X_LL, X_LR}:

p(X_L, X_R | X′) p(X_LL, X_LR | X_L) = [ψ(X_L, X_R) Z(X_L) Z(X_R) / Z(X′)] · [ψ(X_LL, X_LR) Z(X_LL) Z(X_LR) / Z(X_L)],   (7)

where Z(X_L) cancels between the two factors. We start from the root vertex X of the trellis, with partition function Z(X). We sample level by level (and branch by branch) until we reach the leaves of the tree (singletons). At each new step, the normalizing partition function cancels against the corresponding factor from the previous step (see Eq. (7)). Thus, the probability of the sampled tree can be written as:

p(H) = ∏_{(X_L, X_R) ∈ splits(H)} ψ(X_L, X_R) / Z(X) = φ(H) / Z(X),   (8)

where the product runs over the splits at every level and branch, and Z = 1 for each singleton. Thus, this sampling procedure follows the probability distribution p(H) = φ(H)/Z(X) over the set of all hierarchical clusterings H(X).
∎
a.3 Proof of Lower Bound on Number of Trees
The number of binary trees on N leaves is given exactly by (2N−3)!! = 1 · 3 · 5 ⋯ (2N−3) (see Callan (2009); Dale and Moon (1993), and the ordering-based counting argument in Boyles and Welling (2012)). Since each factor satisfies 2k−3 ≥ k−1 for k ≥ 2, the number of trees on N leaves is Ω((N−1)!).
Note that this is a loose lower bound, and it can be improved upon as follows: say a hierarchical clustering is a caterpillar clustering if every internal node in the underlying tree has two children and the set associated with one of those children has size one. There are N!/2 caterpillar clusterings. To see this, note that the i-th level (where the root is level 1) of a caterpillar clustering has exactly one leaf for i ∈ {2, …, N−1}. There are N!/2 choices for the corresponding sequence of singleton sets (the factor of 1/2 accounts for the interchangeable final pair of leaves).
a.4 Proof of the MAP Recursion
Proof.
We proceed in a similar way as detailed in Appendix A.1, as follows. Given a dataset X, pick an element a ∈ X. We consider all possible clusters X_L that contain a. Given X_L, then X_R is fixed so as to satisfy X_L ∪ X_R = X and X_L ∩ X_R = ∅. We want to show that the MAP clustering can be computed recursively in terms of the MAP clusterings of X_L and X_R.
The MAP value $E^*(X)$ is defined as the energy of the clustering with maximal energy among all possible hierarchical clusterings $\mathcal{H}(X)$,
$E^*(X) = \max_{H \in \mathcal{H}(X)} E(H) = \max_{H \in \mathcal{H}(X)} E(X_L, X_R)\, E(H_L)\, E(H_R)$  (9)
where $X_L \cup X_R = X$, $X_L \cap X_R = \emptyset$. Also, $H_L$ and $H_R$ are the subhierarchies in $H$ that are rooted at $X_L$ and $X_R$, respectively. As mentioned earlier, the cluster trellis provides an exact MAP solution conditioned on the fact that the domain of the linkage function is the set of pairs of clusters, and not pairs of trees. Thus, we can rewrite Eq. 9 grouping together all the hierarchies that have the same clusters $X_L, X_R$, as follows
$E^*(X) = \max_{X_L \,:\, x^* \in X_L \subsetneq X} E(X_L, X_R)\, E^*(X_L)\, E^*(X_R)$  (10)
with $X_R = X \setminus X_L$ and $E^*(\{x\}) = 1$ for each singleton $\{x\}$. Thus, the MAP value $E^*(X)$ of a cluster can be written recursively in terms of the MAP values of the subclusters of X.
∎
A.5 Proof of Partition Function Time Complexity
The partition function is computed for each node in the trellis, and due to the order of computation, at the time of computation for node $i$, the partition functions for all nodes in the subtrellis rooted at node $i$ have already been computed. Therefore, the partition function for a node with $i$ elements can be computed in $O(2^i)$ steps (given the precomputed partition functions for each of the node's descendants), since the number of nodes in the subtrellis rooted at a node with $i$ elements corresponds to the powerset of those $i$ elements. There are $\binom{N}{i}$ nodes of size $i$, making the total computation $\sum_{i=1}^{N} \binom{N}{i}\, 2^i = O(3^N)$.
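The counting identity behind the $O(3^N)$ total can be checked directly; a minimal Python sketch (an illustration, not part of the original implementation):

```python
from math import comb

def total_steps(N):
    # A node over i of the N elements costs about 2**i split enumerations, and
    # there are comb(N, i) such nodes. The sum collapses to 3**N because each
    # element is either absent, in the left child, or in the right child:
    # three choices per element.
    return sum(comb(N, i) * 2 ** i for i in range(N + 1))
```

By the binomial theorem, `total_steps(N)` equals $3^N$ for every $N$.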
A.6 Additional Details on PAM50 Experiments
The Pearson correlation coefficient was used as the clustering metric for the PAM50 data set experiments. The correlation clustering input can be represented as a complete weighted graph, $G = (V, E)$, where each edge $(i, j)$ has weight $w_{ij}$. The goal is to construct a clustering of the nodes that maximizes the sum of positive within-cluster edge weights minus the sum of negative across-cluster edge weights. However, the correlations among the subsampled PAM50 data set are all positive. To allow more balanced weights among vertices, we transform the weights and then subtract the average of all transformed weights.
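The centering step can be sketched as follows. Since the exact transform used is not recoverable from this text, Fisher's z-transform (`atanh`) is used below purely as a hypothetical stand-in:

```python
from math import atanh

def centered_weights(correlations):
    # Hypothetical sketch: map each Pearson correlation through a monotone
    # transform (Fisher's z here, standing in for the paper's transform),
    # then subtract the mean so that both positive and negative edge
    # weights appear, as correlation clustering requires.
    w = [atanh(min(max(c, -0.999), 0.999)) for c in correlations]
    mean = sum(w) / len(w)
    return [x - mean for x in w]
```

After centering, strongly correlated pairs carry positive weight and weakly correlated pairs carry negative weight, even when all raw correlations are positive.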
Footnotes
 Corresponding authors: Craig S. Greenberg, Sebastian Macaluso
 csgreenberg@cs.umass.edu, sm4511@nyu.edu
 Technically, this is the definition of binary hierarchical clustering, which we limit our exposition to in this paper. We note that for k-ary hierarchical clustering, k = 2 (i.e., binary clustering) is the most expressive, in that it encodes the greatest number of tree-consistent clusterings. While our algorithms assume a binary tree, they can naturally be extended to k-ary trees, which is left as future work.
 In cases where the natural linkage function maps to $\mathbb{R}$, the linkage value can be exponentiated, as in the Gibbs distribution.
 Our use of the term energy is different from its use in statistical mechanics, where, in the Gibbs distribution, the energy of a state is proportional to the negative log-probability of the system being in that state.
 We abuse notation by overloading $E$ to represent both the energy of a hierarchy, $E(H)$, as well as the linkage function for a pair of clusters, $E(X_L, X_R)$.
 See the Appendix for a proof of the bound on the number of hierarchical clusterings.
 To simplify notation we do not explicitly include the branch label in Eq. 3.
 The parameters for this dataset are , and (see Cranmer et al. (2019a) for specific details about the model)
 The cluster trellis provides an exact solution conditioned on the fact that the domain of the linkage function is the set of pairs of clusters, and not pairs of trees.
 Note that for each singleton $\{x\}$, we have $Z(\{x\}) = 1$.
 Note that for each singleton $\{x\}$, we have $E^*(\{x\}) = 1$.
References
 The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM (JACM). Cited by: §5.
 The time-marginalized coalescent prior for hierarchical clustering. In Advances in Neural Information Processing Systems, pp. 2969–2977. Cited by: §A.3, §1, §4.1, §5.
 The anti-k_t jet clustering algorithm. JHEP 04, pp. 063. External Links: Document, 0802.1189 Cited by: §1, §4.1.
 A combinatorial survey of identities for the double factorial. External Links: 0906.1317 Cited by: §A.3, §2.
 Hierarchical clustering better than average-linkage. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2291–2304. Cited by: §5.
 Approximate hierarchical clustering via sparsest cut and spreading metrics. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 841–854. Cited by: §5.
 Learning concept hierarchies from text with a guided agglomerative clustering algorithm. In Proceedings of the ICML 2005 Workshop on Learning and Extending Lexical Ontologies with Machine Learning Methods, Cited by: §1.
 Hierarchical clustering: objective functions and algorithms. Journal of the ACM (JACM) 66 (4), pp. 1–42. Cited by: §5.
 Hierarchical clustering beyond the worst-case. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §5.
 Cited by: §4.1, §4.1, footnote 7.
 Cited by: §4.1.
 Cited by: §4.1.
 The permuted analogues of three Catalan sets. Vol. 34. External Links: Document, ISSN 0378-3758 Cited by: §A.3, §2.
 A cost function for similaritybased hierarchical clustering. In Symposium on Theory of Computing (STOC), Cited by: §1, §5.
 Tree-structured stick breaking for hierarchical data. In Advances in Neural Information Processing Systems, pp. 19–27. Cited by: §5.
 Compact representation of uncertainty in clustering. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi and R. Garnett (Eds.), pp. 8630–8640. Note: http://papers.nips.cc/paper/8081-compact-representation-of-uncertainty-in-clustering.pdf Cited by: §1, §2, §2, §3, §3, §5.
 Binary to bushy: Bayesian hierarchical clustering with the beta coalescent. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, §5.
 Probabilistic correlation clustering and image partitioning using perturbed multicuts. In International Conference on Scale Space and Variational Methods in Computer Vision, Cited by: §1, §5.
 Pitman-Yor diffusion trees. In Conference on Uncertainty in Artificial Intelligence (UAI), Cited by: §1, §5.
 Computing exact clustering posteriors with subset convolution. Communications in Statistics – Theory and Methods. Cited by: §1, §5.
 Hierarchical clustering using mutual information. EPL (Europhysics Letters) 70 (2), pp. 278. Cited by: §1.
 QCD-aware recursive neural networks for jet physics. JHEP 01, pp. 057. External Links: Document, 1702.00748 Cited by: §4.1.
 Approximation bounds for hierarchical clustering: average linkage, bisecting k-means, and local search. In Advances in Neural Information Processing Systems, Cited by: §5.
 Density modeling and clustering using Dirichlet diffusion trees. Bayesian Statistics. Cited by: §1, §5.
 Nested hierarchical Dirichlet processes. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §5.
 Perturb-and-MAP random fields: using discrete optimization to learn and sample from energy models. In 2011 International Conference on Computer Vision, pp. 193–200. Cited by: §5.
 Hierarchical clustering via spreading metrics. The Journal of Machine Learning Research 18 (1), pp. 3077–3111. Cited by: §5.
 Finding physics signals with shower deconstruction. Phys. Rev. D84, pp. 074002. External Links: Document, 1102.3480 Cited by: §1.
 Finding top quarks with shower deconstruction. Phys. Rev. D87, pp. 054012. External Links: Document, 1211.3140 Cited by: §1.
 Gene expression patterns of breast carcinomas distinguish tumor subclasses with clinical implications. Proceedings of the National Academy of Sciences 98 (19), pp. 10869–10874. Cited by: §1, §4.2.
 Pvclust: an R package for assessing the uncertainty in hierarchical clustering. Bioinformatics 22 (12), pp. 1540–1542. Cited by: §5.
 Bayesian agglomerative clustering with coalescents. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, §5.
 UNC microarray database. Note: https://genome.unc.edu/ Cited by: §4.2.
 Improving dynamic programming strategies for partitioning. Journal of classification 21 (2), pp. 207–230. Cited by: §1.
 A discriminative hierarchical model for fast coreference at large scale. In Association for Computational Linguistics (ACL), Cited by: §5.