Abstract

We consider the problem of identifying multiway block structure from a large noisy tensor. Such problems arise frequently in applications such as genomics, recommender systems, topic modeling, and sensor network localization. We propose a tensor block model, develop a unified least-squares estimation procedure, and obtain theoretical accuracy guarantees for multiway clustering. The statistical convergence of the estimator is established, and we show that the associated clustering procedure achieves partition consistency. A sparse regularization is further developed for identifying important blocks with elevated means. The proposal handles a broad range of data types, including binary, continuous, and hybrid observations. Through simulations and applications to two real datasets, we demonstrate that our approach outperforms previous methods.

Keywords: Tensor block model, Least-square estimation, Sparsity, Dimension reduction.

Multiway clustering via tensor block models


Miaoyan Wang and Yuchen Zeng*

*Miaoyan Wang is Assistant Professor, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: miaoyan.wang@wisc.edu; Yuchen Zeng is a BS/MS student, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: yzeng58@wisc.edu.

Department of Statistics, University of Wisconsin-Madison

1 Introduction

Higher-order tensors have recently attracted increased attention in data-intensive fields such as neuroscience [1], social networks [2], computer vision [3], and genomics [4, 5]. In many applications, the data tensors are often expected to have underlying block structure. One example is multi-tissue expression data [4], in which genome-wide expression profiles are collected from different tissues in a number of individuals. There may be groups of genes similarly expressed in subsets of tissues and individuals; mathematically, this implies an underlying three-way block structure in the data tensor. In a different context, block structure may emerge in a binary-valued tensor. Examples include multilayer network data [2], with the nodes representing the individuals and the layers representing the multiple types of relations. Here a planted block represents a community of individuals that are highly connected within a class of relationships.

[Figure 1 graphic: demo.pdf]

Figure 1: Examples of tensor block model (TBM). (a) Our TBM method is used for multiway clustering and for revealing the underlying checkerbox structure in a noisy tensor. (b) The sparse TBM method is used for detecting sub-tensors of elevated means.

This paper presents a new method and the associated theory for tensors with block structure. We develop a unified least-squares estimation procedure for identifying multiway block structure. The proposal applies to a broad range of data types, including binary, continuous, and hybrid observations. We establish a high-probability error bound for the resulting estimator, and show that the procedure enjoys consistency guarantees on the block structure recovery as the dimension of the data tensor grows. Furthermore, we develop a sparse extension of the tensor block model for block selection. Figure 1 shows two illustrative examples of our method. When the data tensor possesses a checkerbox pattern modulo some unknown reordering of entries, our method amounts to multiway clustering that simultaneously clusters each mode of the tensor (Figure 1a). When the data tensor has no full checkerbox structure but contains a small number of sub-tensors of elevated means, we develop a sparse version of our method to detect these sub-tensors of interest (Figure 1b).

Related work. Our work is closely related to, but also clearly distinct from, low-rank tensor decomposition. A number of methods have been developed for low-rank tensor estimation, including CANDECOMP/PARAFAC (CP) decomposition [6] and Tucker decomposition [7]. The CP model decomposes a tensor into a sum of rank-1 tensors, whereas the Tucker model decomposes a tensor into a core tensor multiplied by orthogonal matrices in each mode. In this paper we investigate an alternative block structure assumption, which has yet to be studied for higher-order tensors. Note that a block structure automatically implies low-rankness. However, as we will show in Section 4, a direct application of low-rank estimation to the current setting results in an inferior estimator. Therefore, a full exploitation of the block structure is necessary; this is the focus of the current paper.

Our work is also connected to biclustering [8] and its higher-order extensions [9, 10]. Existing multiway clustering methods [9, 10, 5, 11] typically take a two-step procedure, first estimating a low-dimensional representation of the data tensor and then applying clustering algorithms to the tensor factors. In contrast, our tensor block model performs estimation and clustering simultaneously in a single step. This approach achieves a higher accuracy and an improved interpretability. Moreover, earlier solutions to multiway clustering [12, 9] focus on algorithmic effectiveness, leaving the statistical optimality of the estimators unaddressed. Very recently, Chi et al. [13] provide an attempt to study the statistical properties of the tensor block model. We will show that our estimator obtains a faster convergence rate than theirs, and the power is further boosted with sparse regularization.

2 Preliminaries

We begin by reviewing a few basic facts about tensors [14]. We use to denote an order- -dimensional tensor. The multilinear multiplication of a tensor by matrices is defined as

which results in an order-, -dimensional tensor. For any two tensors , of identical order and dimensions, their inner product is defined as . The Frobenius norm of a tensor is defined as ; it is the Euclidean norm of regarded as an -dimensional vector. An order-(-) slice of is a sub-tensor of obtained by holding the index in one mode fixed while letting the other indices vary.
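To fix ideas, the following sketch (in NumPy, with illustrative dimensions; the helper name is our own) computes a mode-k multilinear multiplication via the standard unfold-multiply-fold recipe, together with the Frobenius norm and inner product.

```python
import numpy as np

def mode_k_product(T, M, k):
    """Multiply tensor T by matrix M along mode k: unfold, multiply, fold back."""
    T_unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)      # mode-k unfolding
    new_shape = (M.shape[0],) + tuple(np.delete(T.shape, k))
    return np.moveaxis((M @ T_unfold).reshape(new_shape), 0, k)  # fold back into a tensor

T = np.random.randn(4, 5, 6)            # an order-3 tensor
M = np.random.randn(3, 4)               # a matrix acting on mode 0
S = mode_k_product(T, M, 0)             # shape (3, 5, 6): same order, new mode-0 dimension
frob_norm = np.sqrt((T ** 2).sum())     # Frobenius norm = Euclidean norm of the vectorized tensor
inner = (T * T).sum()                   # inner product of T with itself equals frob_norm ** 2
```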

A clustering of objects is a partition of the index set into disjoint non-empty subsets. We refer to the number of clusters, , as the clustering size. Equivalently, the clustering (or partition) can be represented using a “membership matrix”. A membership matrix is an incidence matrix whose -entry is 1 if and only if the element belongs to the cluster , and 0 otherwise. Throughout the paper, we will use the terms “clustering”, “partition”, and “membership matrix” interchangeably. For a higher-order tensor, the concept of index partition applies to each of the modes. A block is a sub-tensor induced by the index partitions along each of the modes. We use the term “cluster” to refer to the marginal partition on mode , and reserve the term “block” for the multiway partition of the tensor.
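Concretely, a membership matrix can be formed from a vector of cluster labels as in the brief sketch below (NumPy; the labels are illustrative and the helper name is our own).

```python
import numpy as np

def membership_matrix(labels, n_clusters):
    """Incidence matrix whose (i, r) entry is 1 iff item i belongs to cluster r."""
    M = np.zeros((len(labels), n_clusters), dtype=int)
    M[np.arange(len(labels)), labels] = 1
    return M

labels = np.array([0, 2, 1, 0, 2])      # a partition of 5 items into 3 clusters
M = membership_matrix(labels, 3)        # each row contains exactly one 1
```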

3 Tensor block model

Let denote an order-, -dimensional data tensor. The main assumption of tensor block model (TBM) is that the observed data tensor is a noisy realization of an underlying tensor that exhibits a checkerbox structure (see Figure 1a). Specifically, suppose that the -th mode of the tensor consists of clusters. If the tensor entry belongs to the block determined by the th cluster in the mode for , then we assume that

(1)

where is the mean of the tensor block indexed by , and ’s are independent, mean-zero noise terms to be specified later. Our goal is to (i) find the clustering along each of the modes, and (ii) estimate the block means , such that a corresponding blockwise-constant checkerbox structure emerges in the data tensor.

The tensor block model (1) falls into a general class of non-overlapping, constant-mean clustering models [15], in that each tensor entry belongs to exactly one block with a common mean. The TBM can be equivalently expressed as a special tensor Tucker model,

(2)

where is a core tensor consisting of block means, is a membership matrix indicating the block allocations along mode for , and is the noise tensor. We view the TBM (2) as a super-sparse Tucker model, in the sense that each column of contains exactly one 1, with all remaining entries equal to 0.
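As an illustration of the representation (2), the following sketch simulates an order-3 block tensor by expanding a core of block means along each mode according to randomly drawn cluster labels and then adding Gaussian noise. The Uniform[-3,3] block means mirror the simulations in Section 7; the dimensions, cluster sizes, and variable names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, n_clusters = (30, 40, 50), (3, 4, 5)

# random cluster labels along each mode (an implicit representation of the membership matrices)
labels = [rng.integers(0, R, size=d) for d, R in zip(dims, n_clusters)]

core = rng.uniform(-3, 3, size=n_clusters)     # block means
signal = core
for k, z in enumerate(labels):
    signal = np.take(signal, z, axis=k)        # expand mode k from R_k blocks to d_k indices
Y = signal + rng.normal(size=dims)             # observed tensor = blockwise-constant signal + noise
```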

We make a general assumption on the noise tensor . The noise terms ’s are assumed to be independent, mean-zero -subgaussian, where is the subgaussianity parameter. More precisely,

(3)

The assumption (3) covers common situations such as Gaussian noise, Bernoulli noise, and noise with bounded support. In particular, we consider two important examples of the TBM:

Example 1 (Gaussian tensor block model).

Let be a continuous-valued tensor. The Gaussian tensor block model (GTBM) is a special case of model (1), with the subgaussianity parameter equal to the error variance. The GTBM serves as the foundation for many tensor clustering algorithms [12, 4, 13].

Example 2 (Stochastic tensor block model).

Let be a binary-valued tensor. The stochastic tensor block model (STBM) is a special case of model (1), with the subgaussianity parameter equal to . The STBM can be viewed as an extension, to higher-order tensors, of the popular stochastic block model [16, 17, 18] for matrix-based network analysis.

More generally, our model also applies to hybrid error distributions, in which different types of distribution are allowed for different portions of the tensor. This scenario may arise, for example, when the data tensor represents concatenated measurements from multiple data sources.

Before discussing estimation, we address the identifiability of the TBM.

Assumption 1 (Irreducible core).

The core tensor is called irreducible if it cannot be written as a block tensor with the number of mode- clusters smaller than , for any .

In the matrix case , irreducibility is equivalent to saying that has no two identical rows and no two identical columns. In the higher-order case, the assumption requires that no two order-(-) slices of are identical. Note that irreducibility is a weaker assumption than full-rankness.

Proposition 1 (Identifiability).

Consider a Gaussian or Bernoulli TBM (1). Under Assumption 1, the factor matrices ’s are identifiable up to permutations of cluster labels.

The identifiability property of the TBM is stronger than that of classical factor models [19, 20]. In Tucker decomposition [21, 14] and many other factor analyses [19, 20], the factors are identifiable only up to orthogonal rotations. Those models recover only the (column) space spanned by , but not the individual factors. In contrast, our model does not suffer from rotational invariance, and as we show in Section 4, every individual factor is consistently estimated in high dimensions. This benefits the interpretation of the factors in the tensor block model.

We propose a least-square approach for estimating the TBM. Let denote the mean signal tensor with block structure. The mean tensor is assumed to belong to the following parameter space

In the following theoretical analysis, we assume the clustering size is known and simply write for short. The adaptation to unknown will be addressed in Section 5.2. The least-squares estimator for the TBM (1) is

(4)

The objective equals (up to constants) the residual sum of squares, hence the name of our estimator.

4 Statistical convergence

In this section, we establish the convergence rate of the least-squares estimator (4) under two metrics. The first is the mean squared error (MSE):

(5)

where are the true and estimated mean tensors, respectively. While the loss function corresponds to the likelihood for the Gaussian tensor model, the same assertion does not hold for other types of distribution such as the stochastic tensor block model. We will show that, with high probability, a simple least-squares estimator achieves a fast convergence rate in a general class of block tensor models.

Theorem 1 (Convergence rate of MSE).

Let be the least-squares estimator of under model (1). There exist two constants such that

(6)

holds with probability at least , uniformly over and all error distributions satisfying (3).

The convergence rate of MSE in (6) consists of two parts. The first part is the number of parameters in the core tensor , while the second part reflects the complexity of estimating the ’s. It is the price one has to pay for not knowing the locations of the blocks.

We compare our bound with the existing literature. The Tucker tensor decomposition has a minimax convergence rate proportional to [21], where is the multilinear rank in mode . Applying Tucker decomposition to the TBM yields , because the mode- rank is bounded by the number of mode- clusters. Now, as both the dimension and the clustering size tend to infinity, we have . Therefore, by fully exploiting the block structure, we obtain a better convergence rate than previously possible.

Recently, [13] proposed a convex relaxation for estimating the TBM. In the special case when the tensor dimensions are equal at every mode , their estimator has a convergence rate of order for all . As we see from (6), our estimator attains a much better convergence rate , which is especially favorable as the order increases.

The bound (6) generalizes previous results on structured matrix estimation in network analysis [22, 16]. Earlier work [16] suggests the following heuristic on the sample complexity for the matrix case:

(7)

Our result supports this important principle for general . Note that, in the TBM, the sample size is the total number of entries , the number of parameters is , and the combinatorial complexity for estimating the block structure is of order .
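In words, and in our own shorthand (a paraphrase rather than the exact displayed form of (7)), the principle reads

\[
\text{estimation error} \;\asymp\;
\frac{\#\{\text{mean parameters}\} \;+\; \text{combinatorial complexity of the unknown structure}}
     {\#\{\text{observed entries}\}},
\]

and plugging in the three TBM quantities listed above recovers the two terms of the bound (6).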

Next we study the consistency of partition. To define the misclassification rate (MCR), we need to introduce some additional notation. Let be two mode- membership matrices, and be the mode- confusion matrix with element , where . Note that the row/column sums of represent the node proportions in each cluster defined by or . We restrict ourselves to non-degenerating clusterings; that is, the row/column sums of are lower bounded by . With a slight abuse of notation, we still use to denote the parameter space with the non-degenerating assumption. The least-squares estimator (4) should also be interpreted with this constraint imposed.

We define the mode- misclassification rate (MCR) as

(8)

In other words, MCR is the element-wise maximum of the confusion matrix after removing the largest entry from each column. Under the non-degenerating assumption, if and only if the confusion matrix is a permutation of a diagonal matrix; that is, the estimated partition matches with the true partition, up to permutations of cluster labels.
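The definition (8) translates directly into code. The sketch below (NumPy; the helper name is our own) builds the mode-k confusion matrix from two label vectors, removes each column's largest entry, and returns the largest remaining entry.

```python
import numpy as np

def misclassification_rate(z_true, z_est, R):
    """Mode-k MCR: max confusion-matrix entry left after deleting the largest entry in each column."""
    d = len(z_true)
    D = np.zeros((R, R))
    for a, b in zip(z_true, z_est):
        D[a, b] += 1.0 / d                                 # confusion matrix of cluster proportions
    D_reduced = D.copy()
    D_reduced[np.argmax(D, axis=0), np.arange(R)] = 0.0    # remove the largest entry per column
    return D_reduced.max()

# zero iff the estimated partition equals the truth up to a relabeling of clusters
misclassification_rate([0, 0, 1, 1, 2], [2, 2, 0, 0, 1], R=3)   # returns 0.0
```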

Theorem 2 (Convergence rate of MCR).

Consider a tensor block model (2) with sub-Gaussian parameter . Define the minimal gap between the blocks , where . Let be the true mode- membership, be the estimator from (4), and the lower bound of cluster proportions. Then, for any ,

(9)

where is a positive constant.

The above theorem shows that our estimator consistently recovers the block structure as the dimension of the data tensor grows. The block-mean gap plays the role of the eigen-separation in the classical tensor Tucker decomposition [21]. Table 1 summarizes the comparison of various tensor methods in the special case when and .

Method | Recovery error (MSE) | Clustering error (MCR) | Block detection (see Section 6)
Tucker [21] | | - | No
CoCo [13] | | - | No
TBM (this paper) | | | Yes

Table 1: Comparison of various tensor decomposition methods.

5 Numerical implementation

5.1 Alternating optimization

We introduce an alternating optimization for solving (4). Estimating consists of finding both the core tensor and the membership matrices ’s. The optimization (4) can be written as

where

The decision variables consist of blocks of variables, one for the core tensor and for the membership matrices ’s. We notice that, if any out of the blocks of variables are known, then the last block of variables can be solved explicitly. This observation suggests that we can iteratively update one block of variables at a time while keeping the others fixed. Specifically, given the collection of ’s, the core tensor estimate consists of the sample averages of each tensor block. Given the block means and the other membership matrices, the remaining membership matrix can be updated using a simple nearest-neighbor search over only discrete points. The full procedure is described in Algorithm 1.

Algorithm 1: Multiway clustering based on tensor block models
Input: Data tensor , clustering size .
Output: Block mean tensor , and the membership matrices ’s.
1: Initialize the marginal clusterings by performing independent k-means on each of the modes.
2: repeat
3:     Update the core tensor . Specifically, for each ,
(10)
       where denotes the indices that belong to the th cluster in mode , and denotes the number of entries in the block indexed by .
4:     for in do
5:         Update the mode- membership matrix . Specifically, for each , assign the cluster label :
           where denotes the tensor coordinates except the -th mode.
6:     end for
7: until convergence

Algorithm 1 can be viewed as a higher-order extension of the ordinary (one-way) k-means algorithm. The core tensor plays the role of the centroids. As each iteration reduces the value of the objective function, which is bounded below, convergence of the algorithm is guaranteed. The per-iteration computational cost scales linearly with the sample size, , and this complexity matches the classical tensor methods [23, 24, 21]. We recognize that obtaining the global optimizer for such a non-convex optimization is typically difficult [25, 1]. Following common practice in non-convex optimization [1], we run the algorithm multiple times, using random initializations with independent one-way k-means on each of the modes.
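For concreteness, a compact NumPy rendering of Algorithm 1 is sketched below. It follows the three updates described above (k-means initialization per mode, blockwise averaging for the core, nearest-centroid reassignment per mode), uses scikit-learn's KMeans, runs a fixed number of sweeps in place of an explicit convergence check, and omits the multiple random restarts; all function and variable names are our own, and the final line reuses the tensor Y simulated in the sketch of Section 3.

```python
import numpy as np
from itertools import product
from sklearn.cluster import KMeans

def tbm_fit(Y, n_clusters, n_sweeps=30, seed=0):
    """Alternating least squares for the tensor block model (a sketch of Algorithm 1)."""
    K, dims = Y.ndim, Y.shape
    # initialization: independent k-means on the rows of each mode-k unfolding
    labels = [KMeans(n_clusters=R, n_init=10, random_state=seed)
              .fit_predict(np.moveaxis(Y, k, 0).reshape(dims[k], -1))
              for k, R in enumerate(n_clusters)]
    for _ in range(n_sweeps):
        # core update: sample average within each block, cf. (10)
        core = np.zeros(n_clusters)
        for block in product(*[range(R) for R in n_clusters]):
            idx = np.ix_(*[np.where(labels[k] == block[k])[0] for k in range(K)])
            vals = Y[idx]
            core[block] = vals.mean() if vals.size else 0.0
        # membership update: assign each mode-k slice to the nearest block-mean slice
        for k in range(K):
            pred = core
            for j in range(K):
                if j != k:
                    pred = np.take(pred, labels[j], axis=j)   # expand every other mode to full length
            Yk = np.moveaxis(Y, k, 0).reshape(dims[k], -1)
            Pk = np.moveaxis(pred, k, 0).reshape(n_clusters[k], -1)
            dists = ((Yk[:, None, :] - Pk[None, :, :]) ** 2).sum(axis=2)
            labels[k] = dists.argmin(axis=1)
    return core, labels

core_hat, labels_hat = tbm_fit(Y, n_clusters=(3, 4, 5))
```

In practice one would wrap tbm_fit in several random restarts and keep the solution with the smallest residual sum of squares, as discussed above.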

5.2 Tuning parameter selection

Algorithm 1 takes the number of clusters as an input. In practice such information is often unknown and needs to be estimated from the data . We propose to select this tuning parameter using Bayesian information criterion (BIC),

(11)

where is the effective number of parameters in the model. In our case we take , which is inspired by (7). We choose the that minimizes via grid search. Our choice of BIC aims to balance the goodness-of-fit to the data against the degrees of freedom in the population model. We test its empirical performance in Section 7.
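A possible grid-search implementation of this selection rule is sketched below, reusing the tbm_fit helper and the simulated tensor Y from the earlier sketches. The Gaussian-style BIC formula and the effective-parameter count (core entries plus a per-mode log-cardinality term) are our reading of (11) and (7), stated here as assumptions rather than the exact expressions.

```python
import numpy as np
from itertools import product

def tbm_bic(Y, n_clusters, seed=0):
    """BIC-type score for one candidate clustering size (formula is an assumption, see text)."""
    core, labels = tbm_fit(Y, n_clusters, seed=seed)
    fitted = core
    for k in range(Y.ndim):
        fitted = np.take(fitted, labels[k], axis=k)          # blockwise-constant fitted tensor
    n, rss = Y.size, ((Y - fitted) ** 2).sum()
    # assumed effective number of parameters: core entries + sum_k d_k * log(R_k)
    p_eff = np.prod(n_clusters) + sum(d * np.log(R) for d, R in zip(Y.shape, n_clusters))
    return n * np.log(rss / n) + p_eff * np.log(n)

grid = list(product(range(2, 6), repeat=3))                  # candidate sizes for an order-3 tensor
best_size = min(grid, key=lambda R: tbm_bic(Y, R))           # grid search: smallest BIC wins
```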

6 Extension to sparse estimation

In some large-scale applications, not every block in a data tensor is of equal importance. For example, in genome-wide expression data analysis, only a few entries represent the signal while the majority come from background noise (see Figure 1b). While our estimator (4) is still able to handle this scenario by assigning small values to some of the ’s, the estimates may suffer from high variance. It is thus beneficial to introduce regularized estimation for a better bias-variance trade-off and improved interpretability.

Here we illustrate a sparse version of the TBM by imposing regularization on the block means, thereby localizing important blocks in the data tensor. This problem can be formulated as variable selection on the block parameters. We propose the following regularized least-squares estimation:

where is the block-mean tensor, is the penalty function with being an index for the tensor norm, and is the penalty tuning parameter. Some widely used penalties include Lasso penalty , sparse subset penalty , ridge penalty , elastic net (linear combination of and ), among many others.

For parsimony, we discuss only the Lasso and sparse subset penalties; other penalizations can be derived similarly. Sparse estimation requires only slight changes to Algorithm 1. When updating the core tensor in (10), we solve a penalized least-squares problem with respect to . The closed form for the entry-wise sparse estimate is (see Lemma 2 in the Supplements):

(12)

where and denotes the ordinary least-squares estimate in (10). The choice of penalty often depends on the study goals and interpretations in specific applications. Given a penalty function, we select the tuning parameter via BIC (11), where we modify into . Here denotes the number of non-zero entries in the tensor. The empirical performance of this proposal will be evaluated in Section 7.
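For intuition, the Lasso penalty leads to a soft-thresholding of the block-average estimates and the sparse subset (L0) penalty to a hard-thresholding, as in the sketch below. This is a simplified stand-in for the exact closed form (12), which also involves the block sizes; the threshold level shown is purely illustrative.

```python
import numpy as np

def soft_threshold(core_ls, lam):
    """Lasso update: shrink each block-average estimate toward zero by lam."""
    return np.sign(core_ls) * np.maximum(np.abs(core_ls) - lam, 0.0)

def hard_threshold(core_ls, lam):
    """Sparse-subset (L0) update: keep a block mean only if it exceeds the threshold in magnitude."""
    return np.where(np.abs(core_ls) > lam, core_ls, 0.0)

core_ls = np.array([2.4, -0.3, 0.0, 1.1])    # hypothetical block-average estimates
soft_threshold(core_ls, 0.5)                 # -> [1.9, 0.0, 0.0, 0.6]
hard_threshold(core_ls, 0.5)                 # -> [2.4, 0.0, 0.0, 1.1]
```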

7 Experiments

In this section, we evaluate the empirical performance of our TBM method. We consider both non-sparse and sparse tensors, and compare the recovery accuracy with other tensor-based methods. Unless otherwise stated, we generate Gaussian tensors under the block model (1). The block means are generated from i.i.d. Uniform[-3,3]. The entries in the noise tensor are generated from i.i.d. . In each simulation study, we report the summary statistics across replications.

7.1 Finite-sample performance

In the first experiment, we assess the empirical relationship between the root mean squared error (RMSE) and the dimension. We set and consider tensors of order 3 and order 4 (see Figure 2). In the case of order-3 tensors, we increase from 20 to 70, and for each choice of , we set the other two dimensions such that . Recall that our theoretical analysis suggests a convergence rate for our estimator. Figure 2a plots the recovery error versus the rescaled sample size . We find that the RMSE decreases roughly at the rate of . This is consistent with our theoretical result. We also observe that tensors with a higher number of blocks tend to yield higher recovery errors, as reflected by the upward shift of the curves as increases. Indeed, a higher means a higher intrinsic dimension of the problem, thus increasing the difficulty of the estimation. Similar behavior can be observed in the order-4 case in Figure 2b, where the rescaled sample size is .

[Figure 2 graphics: figure.pdf and rescale_order4.pdf]

Figure 2: Estimation error for block tensors with Gaussian noise. Each curve corresponds to a fixed clustering size . (a) Average RMSE against rescaled sample size for order-3 tensors. (b) Average RMSE against rescaled sample size for order-4 tensors.

In the second experiment, we evaluate the selection performance of our BIC criterion (11). Supplementary Table S1 reports the selected numbers of clusters under various combinations of dimension , clustering size , and noise . We find that, for the case and , the BIC selection is accurate in the low-to-moderate noise setting. In the high-noise setting with , the selected number of clusters is slightly smaller than the true number, but the accuracy increases when either the dimension increases to or the clustering size reduces to . Within a tensor, the selection seems to be easier for shorter modes with a smaller number of clusters. This phenomenon is to be expected, since a shorter mode has more effective samples for clustering.

7.2 Comparison with alternative methods

Next, we compare our TBM method with two popular low-rank tensor estimation methods: (i) CP decomposition and (ii) Tucker decomposition. Following the literature [13, 5, 9], we perform the clustering by applying k-means to the resulting factors along each of the modes. We refer to these techniques as CP+k-means and Tucker+k-means.

We generate noisy block tensors with five clusters on each of the modes, and then assess both the estimation and clustering performance of each method. Note that TBM performs estimation and clustering simultaneously in a single step, whereas the CP- and Tucker-based methods separate these two tasks into two steps. We use the RMSE to assess the estimation accuracy and the clustering error rate (CER) to measure the clustering accuracy. The CER is calculated as the pairwise disagreement (i.e., one minus the Rand index) between the true and estimated block partitions in the three-way tensor. For a fair comparison, we provide all methods with the true number of clusters.
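For completeness, the CER can be computed by pair counting on the contingency table of the two entry-level block partitions, as in the following sketch. The final lines reuse the simulated labels and the tbm_fit helper from the earlier sketches; all function names are our own.

```python
import numpy as np
from scipy.special import comb

def clustering_error_rate(labels_true, labels_est):
    """CER = 1 - Rand index between the entry-level block partitions induced by the mode labels."""
    def entry_blocks(mode_labels):
        sizes = [int(z.max()) + 1 for z in mode_labels]
        grids = np.meshgrid(*mode_labels, indexing="ij")
        return np.ravel_multi_index([g.ravel() for g in grids], sizes)   # block id of every entry
    t, e = entry_blocks(labels_true), entry_blocks(labels_est)
    cont = np.zeros((t.max() + 1, e.max() + 1))
    np.add.at(cont, (t, e), 1)                                           # contingency table
    together_both = comb(cont, 2).sum()
    together_true, together_est = comb(cont.sum(1), 2).sum(), comb(cont.sum(0), 2).sum()
    n_pairs = comb(t.size, 2)
    rand_index = (n_pairs + 2 * together_both - together_true - together_est) / n_pairs
    return 1.0 - rand_index

_, labels_hat = tbm_fit(Y, n_clusters=(3, 4, 5))
cer = clustering_error_rate(labels, labels_hat)    # 'labels' are the true labels from the simulation
```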

Figure 3a shows that TBM achieves the lowest estimation error among the three methods. The gain in accuracy is more pronounced as the noise grows. Neither CP nor Tucker recovers the signal tensor, although Tucker appears to yield a modest clustering performance (Figure 3b). One possible explanation is that the Tucker model imposes orthogonality on the factors, which makes the subsequent k-means clustering easier than for the CP factors. Figures 3b-c show that the clustering error increases with noise but decreases with dimension. This agrees with our expectation, as in tensor data analysis a larger dimension implies a larger sample size.

[Figure 3 graphic: compare]

Figure 3: Performance comparison in terms of RMSE and CER. (a) Estimation error against noise for tensors of dimension . (b) Clustering error against noise for tensors of dimension . (c) Clustering error against noise for tensors of dimension .

Sparse case. We then evaluate the performance when the signal tensor is sparse. The simulated model is the same as before, except that we generate block means from a mixture of a point mass at zero and Uniform[-3,3], with probabilities (sparsity rate) and , respectively. We generate noisy tensors of dimension with varying levels of sparsity and noise. We utilize the -penalized TBM and primarily focus on the selection accuracy. The performance is quantified via the sparsity error rate, which is the proportion of entries that were incorrectly set to zero or incorrectly set to non-zero. We also report the proportion of true zeros that were correctly identified (correct zeros).

Table 2 reports the BIC-selected averaged across 50 simulations. We see a substantial benefit from penalization. The proposed is able to guide the algorithm to correctly identify zeros, while maintaining good accuracy in identifying non-zeros. The resulting sparsity level is close to the ground truth. Supplementary Figure S1 shows the estimation error and sparsity error against when . Again, the sparse TBM outperforms the other methods.

Sparsity () | Noise | BIC-selected | Estimated Sparsity Rate | Correct Zero Rate | Sparsity Error Rate
0.5 | 4 | 136.0 (37.5) | 0.55 (0.04) | 1.00 (0.02) | 0.06 (0.03)
0.5 | 8 | 439.2 (80.2) | 0.58 (0.06) | 0.94 (0.08) | 0.15 (0.07)
0.8 | 8 | 458.0 (63.3) | 0.81 (0.15) | 0.87 (0.16) | 0.21 (0.13)

Table 2: Sparse TBM for estimating tensors of dimension . The reported statistics are averaged across 50 simulations with standard deviation given in parentheses. Number in bold indicates the ground truth is within 2 standard deviations of the sample average.

7.3 Real data analysis

Lastly, we apply our method to two real datasets. The first dataset is a real-valued tensor, consisting of approximately 1 million expression values from 13 brain tissues, 193 individuals, and 362 genes [4]. We subtracted the overall mean expression from the data, and applied the -penalized TBM to identify important blocks in the resulting tensor. The top blocks exhibit a clear tissue-by-gene specificity (Supplementary Table S2). In particular, the top over-expressed block is driven by tissues {Substantia nigra, Spinal cord} and genes {GFAP, MBP}, suggesting their elevated expression across individuals. In fact, GFAP encodes filament proteins for mature astrocytes and MBP encodes the myelin sheath for oligodendrocytes, both of which play important roles in the central nervous system [26]. Our method also identifies blocks with strongly negative means (i.e., under-expressed blocks). The top under-expressed block is driven by tissues {Cerebellum, Cerebellar Hemisphere} and genes {CDH9, GPR6, RXFP1, CRH, DLX5/6, NKX2-1, SLC17A8}. The gene DLX6 encodes proteins involved in forebrain development [26], whereas the cerebellum tissues are located in the hindbrain. The opposing spatial functions are consistent with the observed under-expression pattern.

The second dataset we consider is the Nations data [2]. This is a binary tensor consisting of 56 political relationships among 14 countries between 1950 and 1965. We note that 78.9% of the entries are zero. Again, we applied the -penalized TBM to identify important blocks in the data. We found that the 14 countries are naturally partitioned into 5 clusters, two representing neutral countries {Brazil, Egypt, India, Israel, Netherlands} and {Burma, Indonesia, Jordan}, one eastern bloc {China, Cuba, Poland, USSR}, and two western blocs, {USA} and {UK}. The relation types are partitioned into 7 clusters, among which the exports-related activities {reltreaties, booktranslations, relbooktranslations, exports3, relexports} and NGO-related activities {relintergovorgs, relngo, intergovorgs3, ngoorgs3} are two major clusters that involve the connection between the neutral and western blocs. Other top blocks are described in the Supplement.

We compared the goodness-of-fit of various clustering methods on the Brain expression and Nations datasets. Because the code for the CoCo method [13] is not yet available, we excluded it from the numerical comparison (see Section 4 for a theoretical comparison with CoCo). Table 3 summarizes the proportion of variance explained by each clustering method:

Dataset | TBM | TBM-sparse | CP+k-means | Tucker+k-means | CoTeC [12] | CoCo [13]
Brain expression | 0.856 | 0.855 | 0.576 | 0.434 | 0.849 | -
Nations | 0.439 | 0.433 | 0.324 | 0.253 | 0.419 | -

Table 3: Comparison of goodness-of-fit in the Brain expression and Nations datasets.

Our method (TBM) achieves the highest variance proportion, suggesting that the entries within the same cluster are close (i.e., a good clustering). As expected, the sparse TBM results in a slightly lower proportion, because it has a lower model complexity at the cost of a small bias. Remarkably, the sparse TBM still achieves a higher goodness-of-fit than the other methods. The improved interpretability with little loss of accuracy makes the sparse TBM appealing in applications.
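For reference, one natural way to compute the proportion of variance explained reported in Table 3 is the blockwise-constant fit's R-squared against the grand mean, as in the brief sketch below; we state this as our working definition for illustration rather than the exact formula behind the table.

```python
import numpy as np

def variance_explained(Y, fitted):
    """Proportion of total variation around the grand mean captured by the fitted block tensor."""
    total = ((Y - Y.mean()) ** 2).sum()
    resid = ((Y - fitted) ** 2).sum()
    return 1.0 - resid / total
```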

8 Conclusion

We have developed a statistical setting for studying the tensor block model. Under the assumption that tensor entries are distributed with a block-specific mean, our estimator achieves a convergence rate that is faster than previously possible. Our TBM method applies to a broad range of data distributions and can handle both sparse and dense data tensors. We demonstrate the benefit of sparse regularization for detection power. In specific applications, prior knowledge may suggest other forms of regularization on the parameters. For example, in multilayer network analysis, it may be reasonable to impose symmetry on the parameters along certain modes. In other applications, non-negativity of the parameter values may be enforced. We leave these directions for future study.

References

  • [1] Hua Zhou, Lexin Li, and Hongtu Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552, 2013.
  • [2] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In International Conference on Machine Learning, volume 11, pages 809–816, 2011.
  • [3] Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In International Conference on Machine Learning, pages 163–171, 2013.
  • [4] Miaoyan Wang, Jonathan Fischer, and Yun S Song. Three-way clustering of multi-tissue multi-individual gene expression data using constrained tensor decomposition. Annals of Applied Statistics, in press, 2019.
  • [5] Victoria Hore, Ana Viñuela, Alfonso Buil, Julian Knight, Mark I McCarthy, Kerrin Small, and Jonathan Marchini. Tensor decomposition for multiple-tissue gene expression experiments. Nature Genetics, 48(9):1094, 2016.
  • [6] Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6(1-4):164–189, 1927.
  • [7] Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
  • [8] Kean Ming Tan and Daniela M Witten. Sparse biclustering of transposable data. Journal of Computational and Graphical Statistics, 23(4):985–1008, 2014.
  • [9] Tamara G Kolda and Jimeng Sun. Scalable tensor decompositions for multi-aspect data mining. In 2008 Eighth IEEE international conference on data mining, pages 363–372. IEEE, 2008.
  • [10] Chang-Dong Wang, Jian-Huang Lai, and S Yu Philip. Multi-view clustering based on belief propagation. IEEE Transactions on Knowledge and Data Engineering, 28(4):1007–1021, 2015.
  • [11] Miaoyan Wang and Lexin Li. Learning from binary multiway data: Probabilistic tensor decomposition and its statistical optimality. arXiv preprint arXiv:1811.05076, 2018.
  • [12] Stefanie Jegelka, Suvrit Sra, and Arindam Banerjee. Approximation algorithms for tensor clustering. In International Conference on Algorithmic Learning Theory, pages 368–383. Springer, 2009.
  • [13] Eric C Chi, Brian R Gaines, Will Wei Sun, Hua Zhou, and Jian Yang. Provable convex co-clustering of tensors. arXiv preprint arXiv:1803.06518, 2018.
  • [14] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
  • [15] Sara C Madeira and Arlindo L Oliveira. Biclustering algorithms for biological data analysis: a survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 1(1):24–45, 2004.
  • [16] Chao Gao and Zongming Ma. Minimax rates in network analysis: Graphon estimation, community detection and hypothesis testing. arXiv preprint arXiv:1811.06055, 2018.
  • [17] Peter J Bickel and Aiyou Chen. A nonparametric view of network models and Newman–Girvan and other modularities. Proceedings of the National Academy of Sciences, 106(50):21068–21073, 2009.
  • [18] Jing Lei, Alessandro Rinaldo, et al. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215–237, 2015.
  • [19] Robin A Darton. Rotation in factor analysis. Journal of the Royal Statistical Society: Series D (The Statistician), 29(3):167–194, 1980.
  • [20] Hervé Abdi. Factor rotations in factor analyses. Encyclopedia for Research Methods for the Social Sciences, Sage: Thousand Oaks, pages 792–795, 2003.
  • [21] Anru Zhang and Dong Xia. Tensor SVD: Statistical and computational limits. IEEE Transactions on Information Theory, 2018.
  • [22] Chao Gao, Yu Lu, Zongming Ma, and Harrison H Zhou. Optimal estimation and completion of matrices with biclustering structures. The Journal of Machine Learning Research, 17(1):5602–5630, 2016.
  • [23] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014.
  • [24] Miaoyan Wang and Yun Song. Tensor decompositions via two-mode higher-order SVD (HOSVD). In Artificial Intelligence and Statistics, pages 614–622, 2017.
  • [25] Daniel Aloise, Amit Deshpande, Pierre Hansen, and Preyas Popat. NP-hardness of Euclidean sum-of-squares clustering. Machine Learning, 75(2):245–248, 2009.
  • [26] Nuala A O’Leary, Mathew W Wright, J Rodney Brister, Stacy Ciufo, Diana Haddad, Rich McVeigh, Bhanu Rajput, Barbara Robbertse, Brian Smith-White, Danso Ako-Adjei, et al. Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation. Nucleic Acids Research, 44(D1):D733–D745, 2015.

Acknowledgements

This research was supported by NSF grant DMS-1915978 and the University of Wisconsin-Madison, Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.
