Multiway clustering via tensor block models

Yuchen Zeng and Miaoyan Wang*

Department of Statistics, University of Wisconsin-Madison

*Miaoyan Wang is Assistant Professor, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: miaoyan.wang@wisc.edu; Yuchen Zeng is a BS/MS student, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: yzeng58@wisc.edu.

Abstract

We consider the problem of identifying multiway block structure from a large noisy tensor. Such problems arise frequently in applications such as genomics, recommendation systems, topic modeling, and sensor network localization. We propose a tensor block model, develop a unified least-square estimation procedure, and obtain theoretical accuracy guarantees for multiway clustering. We establish the statistical convergence of the estimator and show that the associated clustering procedure achieves partition consistency. A sparse regularization is further developed for identifying important blocks with elevated means. The proposal handles a broad range of data types, including binary, continuous, and hybrid observations. Through simulations and applications to two real datasets, we demonstrate that our approach outperforms previous methods.

Keywords: Tensor block model, Clustering, Least-square estimation, Dimension reduction.

1 Introduction

Higher-order tensors have recently attracted increased attention in data-intensive fields such as neuroscience [1, 2], social networks [3, 4], computer vision [5, 6], and genomics [7, 8]. In many applications, the data tensors are often expected to have underlying block structure. One example is multi-tissue expression data [7], in which genome-wide expression profiles are collected from different tissues in a number of individuals. There may be groups of genes similarly expressed in subsets of tissues and individuals; mathematically, this implies an underlying three-way block structure in the data tensor. In a different context, block structure may emerge in a binary-valued tensor. Examples include multilayer network data [3], with the nodes representing the individuals and the layers representing the multiple types of relations. Here a planted block represents a community of individuals that are highly connected within a class of relationships.

[Figure: demo.pdf]

Figure 1: Examples of the tensor block model (TBM). (a) Our TBM method is used for multiway clustering and for revealing the underlying checkerbox structure in a noisy tensor. (b) The sparse TBM method is used for detecting sub-tensors with elevated means.

This paper presents a new method and the associated theory for tensors with block structure. We develop a unified least-square estimation procedure for identifying multiway block structure. The proposal applies to a broad range of data types, including binary, continuous, and hybrid observations. We establish a high-probability error bound for the resulting estimator, and show that the procedure enjoys consistency guarantees for block structure recovery as the dimension of the data tensor grows. Furthermore, we develop a sparse extension of the tensor block model for block selection. Figure 1 shows two immediate examples of our method. When the data tensor possesses a checkerbox pattern modulo some unknown reordering of entries, our method amounts to multiway clustering that simultaneously clusters each mode of the tensor (Figure 1a). When the data tensor has no full checkerbox structure but contains a small number of sub-tensors with elevated means, we develop a sparse version of our method to detect these sub-tensors of interest (Figure 1b).

Related work. Our work is closely related to, but also clearly distinct from, low-rank tensor decomposition. A number of methods have been developed for low-rank tensor estimation, including the CANDECOMP/PARAFAC (CP) decomposition [9] and the Tucker decomposition [10]. The CP model decomposes a tensor into a sum of rank-1 tensors, whereas the Tucker model decomposes a tensor into a core tensor multiplied by orthogonal matrices in each mode. In this paper we investigate an alternative block structure assumption, which has yet to be studied for higher-order tensors. Note that a block structure automatically implies low-rankness. However, as we will show in Section 4, a direct application of low-rank estimation to the current setting results in an inferior estimator. Therefore, a full exploitation of the block structure is necessary; this is the focus of the current paper.

Our work is also connected to biclustering and its higher-order extensions [11, 12, 13]. Existing multiway clustering methods [12, 13, 8] typically take a two-step procedure, by first estimating a low-dimensional representation of the data tensor and then applying clustering algorithms to the tensor factors. In contrast, our tensor block model takes a single shot to perform estimation and clustering simultaneously. This approach achieves higher accuracy and improved interpretability. Moreover, earlier solutions to multiway clustering [14, 12] focus on the algorithmic effectiveness, leaving the statistical optimality of the estimators unaddressed. Very recently, Chi et al. [15] provided an attempt to study the statistical properties of the tensor block model. We will show that our estimator obtains a faster convergence rate than theirs, and the power is further boosted with a sparse regularization.

2 Preliminaries

We begin by reviewing a few basic facts about tensors [16]. We use $\mathcal{Y} = (y_{i_1,\ldots,i_K}) \in \mathbb{R}^{d_1 \times \cdots \times d_K}$ to denote an order-$K$, $(d_1,\ldots,d_K)$-dimensional tensor. The multilinear multiplication of a tensor $\mathcal{Y}$ by matrices $\boldsymbol{M}_k = (m_{j_k, i_k}) \in \mathbb{R}^{s_k \times d_k}$, $k=1,\ldots,K$, is defined as

$$\mathcal{Y} \times_1 \boldsymbol{M}_1 \times_2 \cdots \times_K \boldsymbol{M}_K = \Big( \sum_{i_1,\ldots,i_K} y_{i_1,\ldots,i_K}\, m_{j_1,i_1} \cdots m_{j_K,i_K} \Big),$$

which results in an order-$K$, $(s_1,\ldots,s_K)$-dimensional tensor. For any two tensors $\mathcal{Y}$, $\mathcal{Y}'$ of identical order and dimensions, their inner product is defined as $\langle \mathcal{Y}, \mathcal{Y}' \rangle = \sum_{i_1,\ldots,i_K} y_{i_1,\ldots,i_K}\, y'_{i_1,\ldots,i_K}$. The Frobenius norm of a tensor is defined as $\|\mathcal{Y}\|_F = \sqrt{\langle \mathcal{Y}, \mathcal{Y} \rangle}$; it is the Euclidean norm of $\mathcal{Y}$ regarded as a $(\prod_k d_k)$-dimensional vector. A fiber of $\mathcal{Y}$ is an order-$(K-1)$ sub-tensor of $\mathcal{Y}$ obtained by holding the index in one mode fixed while letting the other indices vary.
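To make the multilinear multiplication concrete, the following sketch implements the mode-$k$ product and the Frobenius norm with NumPy; the function name mode_k_product and the unfolding-based implementation are our own illustrative choices, not code from the paper.

import numpy as np

def mode_k_product(Y, M, k):
    """Multiply tensor Y by matrix M along mode k (illustrative implementation)."""
    # Move mode k to the front, unfold, multiply, then restore the mode ordering.
    Yk = np.moveaxis(Y, k, 0).reshape(Y.shape[k], -1)        # mode-k unfolding of Y
    out = M @ Yk                                              # shape (s_k, product of other dims)
    new_shape = (M.shape[0],) + tuple(np.delete(Y.shape, k))
    return np.moveaxis(out.reshape(new_shape), 0, k)

Y = np.random.randn(4, 5, 6)        # an order-3 tensor
M1 = np.random.randn(2, 4)          # a matrix acting on mode 1 (axis 0)
Z = mode_k_product(Y, M1, k=0)      # resulting tensor has shape (2, 5, 6)
frob = np.sqrt(np.sum(Y ** 2))      # Frobenius norm: Euclidean norm of the vectorized tensor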

A clustering of $d$ objects is a partition of the index set $[d] = \{1,\ldots,d\}$ into disjoint non-empty subsets. We refer to the number of clusters, $R$, as the clustering size. Equivalently, the clustering (or partition) can be represented using a “membership matrix”. A membership matrix $\boldsymbol{M} \in \{0,1\}^{d \times R}$ is an incidence matrix whose $(i,r)$-entry is 1 if and only if the element $i$ belongs to the cluster $r$, and 0 otherwise. Throughout the paper, we will use the terms “clustering”, “partition”, and “membership matrix” interchangeably. For a higher-order tensor, the concept of index partition applies to each of the $K$ modes. A block is a sub-tensor induced by the index partitions along each of the modes. We use the term “cluster” to refer to the marginal partition on mode $k$, and reserve the term “block” for the multiway partition of the tensor.
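As a small illustration of the membership-matrix representation (a sketch of ours, with 0-indexed cluster labels as an assumption), the incidence matrix can be built from a label vector as follows:

import numpy as np

def membership_matrix(labels, R):
    """Build the d x R incidence matrix from cluster labels in {0, ..., R-1}."""
    d = len(labels)
    M = np.zeros((d, R), dtype=int)
    M[np.arange(d), labels] = 1       # exactly one 1 per row
    return M

labels = np.array([0, 2, 1, 0, 2])    # a partition of 5 objects into 3 clusters
M = membership_matrix(labels, R=3)    # column r indexes cluster r; each row sums to one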

We use lower-case letters (e.g., $a$, $y$) for scalars and vectors, upper-case boldface letters (e.g., $\boldsymbol{M}$) for matrices, and calligraphic letters (e.g., $\mathcal{Y}$, $\mathcal{C}$) for tensors of order 3 or greater. We say that an event $A$ occurs “with high probability” if $\mathbb{P}(A)$ tends to 1 as the tensor dimensions $d_1,\ldots,d_K$ tend to infinity. We say that $A$ occurs “with very high probability” if $\mathbb{P}(A)$ tends to 1 faster than any polynomial of the dimensions.

3 Tensor block model

Let $\mathcal{Y} = (y_{i_1,\ldots,i_K}) \in \mathbb{R}^{d_1 \times \cdots \times d_K}$ denote an order-$K$, $(d_1,\ldots,d_K)$-dimensional data tensor. The main assumption of the tensor block model (TBM) is that the observed data tensor is a noisy realization of an underlying tensor that exhibits a checkerbox structure (see Figure 1a). Specifically, suppose that the $k$-th mode of the tensor consists of $R_k$ clusters. If the tensor entry $(i_1,\ldots,i_K)$ belongs to the block determined by the $r_k$-th cluster in the mode $k$, for $k=1,\ldots,K$, then we assume that

$$y_{i_1,\ldots,i_K} = c_{r_1,\ldots,r_K} + \varepsilon_{i_1,\ldots,i_K}, \qquad (1)$$

where $c_{r_1,\ldots,r_K}$ is the mean of the tensor block indexed by $(r_1,\ldots,r_K)$, and the $\varepsilon_{i_1,\ldots,i_K}$'s are independent, mean-zero noise terms to be specified later. Our goal is to (i) find the clustering along each of the $K$ modes, and (ii) estimate the block means $c_{r_1,\ldots,r_K}$, such that a corresponding blockwise-constant checkerbox structure emerges in the data tensor.

The tensor block model (1) falls into a general class of non-overlapping, constant-mean clustering models [17], in that each tensor entry belongs to exactly one block with a common mean. The TBM can be equivalently expressed as a special tensor Tucker model,

$$\mathcal{Y} = \mathcal{C} \times_1 \boldsymbol{M}_1 \times_2 \cdots \times_K \boldsymbol{M}_K + \mathcal{E}, \qquad (2)$$

where $\mathcal{C} = (c_{r_1,\ldots,r_K}) \in \mathbb{R}^{R_1 \times \cdots \times R_K}$ is a core tensor consisting of block means, $\boldsymbol{M}_k \in \{0,1\}^{d_k \times R_k}$ is a membership matrix indicating the block allocations along mode $k$ for $k=1,\ldots,K$, and $\mathcal{E} = (\varepsilon_{i_1,\ldots,i_K})$ is the noise tensor. We view the TBM (2) as a super-sparse Tucker model, in the sense that each factor matrix $\boldsymbol{M}_k$ is binary, with a single 1 in each row and massive 0's elsewhere.
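A minimal simulation sketch of model (2) in NumPy is given below; the function name simulate_tbm, the uniform block means, the uniformly random cluster labels, and the Gaussian noise (as in Example 1 below) are our own illustrative assumptions.

import numpy as np

def simulate_tbm(dims, R, sigma=1.0, rng=None):
    """Simulate an order-K tensor from the block model (2): Y = C x_1 M_1 ... x_K M_K + E."""
    rng = np.random.default_rng(rng)
    C = rng.uniform(-3, 3, size=R)                                   # core tensor of block means
    labels = [rng.integers(0, R[k], size=dims[k]) for k in range(len(dims))]
    # Expanding the core through the membership matrices is the same as indexing C
    # by the cluster label of each coordinate.
    signal = C[np.ix_(*labels)]
    noise = sigma * rng.standard_normal(dims)
    return signal + noise, labels, C

Y, labels, C = simulate_tbm(dims=(40, 40, 40), R=(4, 4, 4), sigma=0.5, rng=0)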

We make a general assumption on the noise tensor $\mathcal{E}$. The noise terms $\varepsilon_{i_1,\ldots,i_K}$'s are assumed to be independent, mean-zero, $\sigma$-subgaussian, where $\sigma > 0$ is the subgaussianity parameter. More precisely,

$$\mathbb{E}\, e^{\lambda \varepsilon_{i_1,\ldots,i_K}} \leq e^{\sigma^2 \lambda^2 / 2} \quad \text{for all } \lambda \in \mathbb{R}. \qquad (3)$$

The assumption (3) incorporates common situations such as Gaussian noise, Bernoulli noise, and noise with bounded support. In particular, we consider two important examples of the TBM:

Example 1 (Gaussian tensor block model).

Let $\mathcal{Y}$ be a continuous-valued tensor. The Gaussian tensor block model (GTBM) is a special case of model (1), with the subgaussianity parameter $\sigma^2$ equal to the error variance. The GTBM serves as the foundation for many tensor clustering algorithms [14, 7, 15].

Example 2 (Stochastic tensor block model).

Let $\mathcal{Y}$ be a binary-valued tensor. The stochastic tensor block model (STBM) is a special case of model (1), with the subgaussianity parameter $\sigma^2$ equal to $1/4$, since a centered Bernoulli noise term is bounded in an interval of length 1. The STBM can be viewed as an extension, to higher-order tensors, of the popular stochastic block model [18, 19] for matrix-based network analysis.

More generally, our model also applies to hybrid error distributions, in which different types of distributions are allowed for different portions of the tensor. This scenario may happen, for example, when the data tensor represents concatenated measurements from multiple data sources.
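As a brief justification (ours, via the standard Hoeffding lemma) for why bounded noise, including centered Bernoulli noise, satisfies the subgaussian condition (3) as written above: a mean-zero random variable $\varepsilon$ supported on $[a, b]$ satisfies

$$\mathbb{E}\, e^{\lambda \varepsilon} \leq \exp\!\Big( \frac{\lambda^2 (b-a)^2}{8} \Big) \quad \text{for all } \lambda \in \mathbb{R},$$

so any noise with support of length $b - a \leq 1$, such as $\varepsilon = y - p$ with $y \in \{0,1\}$, is $\tfrac{1}{2}$-subgaussian, i.e., $\sigma^2 = 1/4$ in (3).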

Before we discuss the estimation, we present the identifiability of the TBM.

Assumption 1 (Irreducible core).

The core tensor $\mathcal{C}$ is called irreducible if it cannot be written as a block tensor with the number of mode-$k$ clusters smaller than $R_k$, for any $k = 1,\ldots,K$.

In the matrix case $K = 2$, the irreducibility is equivalent to saying that $\mathcal{C}$ has no two identical rows and no two identical columns. In the higher-order case, the assumption requires that no two of the order-$(K-1)$ fibers of $\mathcal{C}$ along the same mode are identical. Note that irreducibility is a weaker assumption than full-rankness.

Proposition 1 (Identifiability).

Consider a Gaussian or Bernoulli TBM (1). Under Assumption 1, the factor matrices $\boldsymbol{M}_k$'s are identifiable up to permutations of cluster labels.

The identifiability property for the TBM is stronger than that for classical factor models [20, 21]. In the Tucker decomposition [22, 16] and many other factor analyses [20, 21], the factors are identifiable only up to orthogonal rotations. Those models recover only the (column) space spanned by the factors, but not the individual factors. In contrast, our model does not suffer from rotational invariance, and as we show in Section 4, every individual factor is consistently estimated in high dimensions. This brings a benefit to the interpretation of factors in the tensor block model.

We propose a least-square approach for estimating the TBM. Let $\Theta = \mathcal{C} \times_1 \boldsymbol{M}_1 \times_2 \cdots \times_K \boldsymbol{M}_K$ denote the mean signal tensor with block structure. The mean tensor is assumed to belong to the following parameter space:

$$\mathcal{P}_{(R_1,\ldots,R_K)} = \big\{ \Theta = \mathcal{C} \times_1 \boldsymbol{M}_1 \times_2 \cdots \times_K \boldsymbol{M}_K \colon\ \mathcal{C} \in \mathbb{R}^{R_1 \times \cdots \times R_K},\ \boldsymbol{M}_k \in \{0,1\}^{d_k \times R_k} \text{ membership matrices} \big\}.$$

In the following theoretical analysis, we assume the clustering size $(R_1,\ldots,R_K)$ is known and simply write $\mathcal{P} = \mathcal{P}_{(R_1,\ldots,R_K)}$ for short. The adaptation to unknown clustering size will be addressed in Section 5.2. The least-square estimator for the TBM (1) is

$$\hat{\Theta} = \arg\min_{\Theta \in \mathcal{P}} \|\mathcal{Y} - \Theta\|_F^2. \qquad (4)$$

The objective is equal (ignoring constants) to the sum of squared errors, and hence the name of our estimator.

4 Statistical convergence

In this section, we establish the convergence rate of the least-square estimator (4). While the loss function corresponds to the likelihood for the Gaussian tensor model, the same assertion does not hold for other types of distributions such as the stochastic tensor block model. Surprisingly, we will show that, with very high probability, the simple least-square estimator achieves a nearly optimal convergence rate in a general class of block tensor models.

We define the estimation accuracy using the mean squared error (MSE):

$$\mathrm{MSE}(\hat{\Theta}, \Theta_{\text{true}}) = \frac{1}{\prod_k d_k} \|\hat{\Theta} - \Theta_{\text{true}}\|_F^2, \qquad (5)$$

where $\Theta_{\text{true}}$ and $\hat{\Theta}$ are the true and estimated mean tensors, respectively.

Theorem 1 (Convergence rate).

Let $\hat{\Theta}$ be the least-square estimator of $\Theta_{\text{true}}$ under model (1). There exist two constants $C_1, C_2 > 0$ such that

$$\mathrm{MSE}(\hat{\Theta}, \Theta_{\text{true}}) \leq C_1 \sigma^2\, \frac{\prod_k R_k + \sum_k d_k \log R_k}{\prod_k d_k} \qquad (6)$$

holds with probability at least $1 - \exp\big(-C_2 \sum_k d_k \log R_k\big)$, uniformly over $\Theta_{\text{true}} \in \mathcal{P}$ and all error distributions satisfying (3).

The convergence rate in (6) consists of two parts. The first part, $\prod_k R_k$, is the number of parameters in the core tensor $\mathcal{C}$, while the second part, $\sum_k d_k \log R_k$, reflects the complexity of estimating the membership matrices $\boldsymbol{M}_k$'s. It is the price that one has to pay for not knowing the locations of the blocks.

We compare our bound with the existing literature. The Tucker tensor decomposition has a minimax convergence rate proportional to $(\sum_k r_k d_k + \prod_k r_k)/\prod_k d_k$ [22], where $r_k$ is the multilinear rank in the mode $k$. Applying Tucker decomposition to the TBM yields a rate of order $(\sum_k R_k d_k + \prod_k R_k)/\prod_k d_k$, because the mode-$k$ rank is bounded by the number of mode-$k$ clusters. Now, as both the dimension $d_k$ and clustering size $R_k$ tend to infinity, we have $d_k \log R_k \ll d_k R_k$. Therefore, by fully exploiting the block structure, we obtain a better convergence rate than previously possible.

Recently, [15] proposed a convex relaxation for estimating the TBM. In the special case when the tensor dimensions are equal at every mode, $d_1 = \cdots = d_K = d$, their estimator has a convergence rate of the same order for every $K$. As we see from (6), our estimator obtains a much better convergence rate, which is especially favorable as the order $K$ increases.

The bound (6) generalizes previous results on structured matrix estimation in network analysis [23, 19]. Earlier work [19, 24] suggests the following heuristic on the sample complexity for the matrix case:

$$\text{convergence rate} \asymp \frac{\#\{\text{parameters}\} + \text{combinatorial complexity of the structure}}{\text{sample size}}. \qquad (7)$$

Our result supports this important principle for general $K$. Note that, in the TBM, the sample size is the total number of entries $\prod_k d_k$, the number of parameters is $\prod_k R_k$, and the combinatorial complexity for estimating the block structure is of order $\sum_k d_k \log R_k$, the log-cardinality of all possible block assignments.
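For concreteness, here is a worked instance of the heuristic (7) with the quantities reconstructed above, using hypothetical dimensions $d_1 = d_2 = d_3 = 50$ and clustering sizes $R_1 = R_2 = R_3 = 4$:

$$\prod_k d_k = 50^3 = 125{,}000, \qquad \prod_k R_k = 4^3 = 64, \qquad \sum_k d_k \log R_k = 3 \times 50 \log 4 \approx 208,$$

so the heuristic convergence rate is approximately $(64 + 208)/125{,}000 \approx 2.2 \times 10^{-3}$, dominated by the combinatorial term.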

We next study the clustering consistency of our method. Let $\boldsymbol{M}_k$, $\boldsymbol{M}'_k$ be two membership matrices in the mode $k$. We define the misclassification rate as $\mathrm{MCR}(\boldsymbol{M}_k, \boldsymbol{M}'_k) = \frac{1}{d_k} \big| \{ i \in [d_k] \colon z_k(i) \neq z'_k(i) \} \big|$. Here $z_k(i)$ (respectively, $z'_k(i)$) denotes the cluster label that entry $i$ belongs to, based on the partition induced by $\boldsymbol{M}_k$ (respectively, $\boldsymbol{M}'_k$).
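In practice the cluster labels are recovered only up to a permutation (as in Theorem 2 below), so an empirical misclassification rate is usually minimized over label permutations. A sketch of such a computation, using the Hungarian algorithm from SciPy, is given below; the function name and the 0-indexed labels are our own illustrative choices.

import numpy as np
from scipy.optimize import linear_sum_assignment

def misclassification_rate(labels_true, labels_est, R):
    """Fraction of objects whose estimated label differs from the truth,
    minimized over all permutations of the R estimated labels."""
    counts = np.zeros((R, R), dtype=int)          # counts[r, s]: true label r, estimated label s
    for t, e in zip(labels_true, labels_est):
        counts[t, e] += 1
    row, col = linear_sum_assignment(-counts)      # label permutation maximizing the agreement
    return 1.0 - counts[row, col].sum() / len(labels_true)

mcr = misclassification_rate(np.array([0, 0, 1, 2, 2]), np.array([1, 1, 0, 2, 2]), R=3)  # 0.0 after relabeling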

Theorem 2 (Clustering consistency).

Consider a TBM (1) with clustering sizes $R_1,\ldots,R_K$. Define the block-mean gap $\delta_k = \min_{r \neq r'} \| \mathcal{C}_{r,:} - \mathcal{C}_{r',:} \|_F$, where $:$ denotes the tensor coordinates except the $k$-th mode. Let $\boldsymbol{M}_k$ be the true mode-$k$ membership matrix and $\hat{\boldsymbol{M}}_k$ the estimator from (4). Suppose that, as the dimensions grow, the noise-to-gap ratio $\sigma/\delta_k$ vanishes sufficiently fast for every mode $k$. Then, the proportions of misclassified entries go to zero in probability; i.e., there exist permutation matrices $\boldsymbol{P}_k$'s such that $\mathrm{MCR}(\hat{\boldsymbol{M}}_k \boldsymbol{P}_k, \boldsymbol{M}_k) \to 0$ in probability, for each $k = 1,\ldots,K$.

The above theorem shows that our estimator consistently recovers the block structure as the dimension of the data tensor grows.

5 Numerical implementation

5.1 Alternating optimization

We introduce an alternating optimization algorithm for solving (4). Estimating $\hat{\Theta}$ consists of finding both the core tensor $\mathcal{C}$ and the membership matrices $\boldsymbol{M}_k$'s. The optimization (4) can be written as

$$\min_{\mathcal{C},\, \boldsymbol{M}_1, \ldots, \boldsymbol{M}_K} \ \| \mathcal{Y} - \mathcal{C} \times_1 \boldsymbol{M}_1 \times_2 \cdots \times_K \boldsymbol{M}_K \|_F^2,$$

where $\mathcal{C} \in \mathbb{R}^{R_1 \times \cdots \times R_K}$ and each $\boldsymbol{M}_k$ ranges over the set of $d_k \times R_k$ membership matrices.

The decision variables consist of $K+1$ blocks of variables, one for the core tensor $\mathcal{C}$ and $K$ for the membership matrices $\boldsymbol{M}_k$'s. We notice that, if any $K$ out of the $K+1$ blocks of variables are known, then the last block of variables can be solved explicitly. This observation suggests that we can iteratively update one block of variables at a time while keeping the others fixed. Specifically, given the collection of $\boldsymbol{M}_k$'s, the core tensor estimate consists of the sample averages of each tensor block. Given the block means and all but one of the membership matrices, the remaining membership matrix can be solved using a simple nearest neighbor search over only $R_k$ discrete points per object. The full procedure is described in Algorithm 1.

Input: Data tensor $\mathcal{Y}$, clustering sizes $(R_1,\ldots,R_K)$.
Output: Block mean tensor $\hat{\mathcal{C}}$, and the membership matrices $\hat{\boldsymbol{M}}_k$'s.
1: Initialize the marginal clusterings by performing independent $k$-means on each of the $K$ modes.
2: repeat
3:     Update the core tensor $\hat{\mathcal{C}}$. Specifically, for each block index $(r_1,\ldots,r_K)$,
$$\hat{c}_{r_1,\ldots,r_K} = \frac{1}{|\Delta_1(r_1)| \cdots |\Delta_K(r_K)|} \sum_{i_1 \in \Delta_1(r_1)} \cdots \sum_{i_K \in \Delta_K(r_K)} y_{i_1,\ldots,i_K}, \qquad (8)$$
where $\Delta_k(r_k)$ denotes the indices that belong to the $r_k$-th cluster in the mode $k$, and $|\Delta_1(r_1)| \cdots |\Delta_K(r_K)|$ denotes the number of entries in the block indexed by $(r_1,\ldots,r_K)$.
4:     for $k$ in $\{1,\ldots,K\}$ do
5:         Update the mode-$k$ membership matrix $\hat{\boldsymbol{M}}_k$. Specifically, for each $i \in [d_k]$, assign the cluster label $\hat{z}_k(i)$:
$$\hat{z}_k(i) = \arg\min_{r \in [R_k]} \big\| \mathcal{Y}_{i,:} - \hat{\mathcal{D}}_{r,:} \big\|_F^2, \quad \text{where } \hat{\mathcal{D}} = \hat{\mathcal{C}} \times_1 \hat{\boldsymbol{M}}_1 \cdots \times_{k-1} \hat{\boldsymbol{M}}_{k-1} \times_{k+1} \hat{\boldsymbol{M}}_{k+1} \cdots \times_K \hat{\boldsymbol{M}}_K,$$
and $:$ denotes the tensor coordinates except the $k$-th mode.
6:     end for
7: until convergence

Algorithm 1: Multiway clustering based on tensor block models

Algorithm 1 can be viewed as a higher-order extension of the ordinary (one-way) $k$-means algorithm. The core tensor plays the role of the centroids. As each iteration reduces the value of the objective function, which is bounded below, convergence of the algorithm is guaranteed. The per-iteration computational cost scales linearly with the sample size $\prod_k d_k$, and this complexity matches the classical tensor methods [25, 26, 22]. We recognize that obtaining the global optimizer for such a non-convex optimization is typically difficult [27, 2]. Following the common practice in non-convex optimization [2], we run the algorithm multiple times, using random initializations with independent one-way $k$-means on each of the modes.
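The following NumPy sketch spells out the two alternating updates (block-mean averaging and nearest-centroid reassignment). It is our own compact illustration, not the authors' reference implementation: it uses random initial labels instead of the mode-wise $k$-means initialization and a fixed number of iterations instead of a convergence check.

import numpy as np

def tbm_fit(Y, R, n_iter=50, rng=None):
    """Alternating least squares for the tensor block model (sketch of Algorithm 1).
    Y: order-K data tensor; R: tuple of clustering sizes (R_1, ..., R_K)."""
    rng = np.random.default_rng(rng)
    K = Y.ndim
    labels = [rng.integers(0, R[k], size=Y.shape[k]) for k in range(K)]   # simplified initialization

    for _ in range(n_iter):
        # Core update: sample average within each block, cf. (8).
        C = np.zeros(R)
        members = [[np.where(labels[k] == r)[0] for r in range(R[k])] for k in range(K)]
        for idx in np.ndindex(*R):
            block = Y[np.ix_(*[members[k][idx[k]] for k in range(K)])]
            C[idx] = block.mean() if block.size else 0.0
        # Membership update: nearest-centroid search over R_k candidates for each mode.
        for k in range(K):
            other = [labels[j] if j != k else np.arange(R[k]) for j in range(K)]
            D = C[np.ix_(*other)]                            # predicted slices for each candidate cluster
            Yk = np.moveaxis(Y, k, 0).reshape(Y.shape[k], -1)
            Dk = np.moveaxis(D, k, 0).reshape(R[k], -1)
            dists = ((Yk[:, None, :] - Dk[None, :, :]) ** 2).sum(axis=2)
            labels[k] = dists.argmin(axis=1)
    return C, labels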

5.2 Tuning parameter selection

Algorithm 1 takes the number of clusters $(R_1,\ldots,R_K)$ as an input. In practice such information is often unknown and needs to be estimated from the data $\mathcal{Y}$. We propose to select this tuning parameter using the Bayesian information criterion (BIC),

$$\mathrm{BIC}(R_1,\ldots,R_K) = \Big( \prod_k d_k \Big) \log\!\bigg( \frac{\|\mathcal{Y} - \hat{\Theta}\|_F^2}{\prod_k d_k} \bigg) + p_e \log\Big( \prod_k d_k \Big), \qquad (9)$$

where $p_e$ is the effective number of parameters in the model. In our case we take $p_e = \prod_k R_k + \sum_k d_k \log R_k$, which is inspired by (7). We choose the clustering size that minimizes the BIC via grid search. Our choice of BIC aims to balance the goodness-of-fit for the data against the degrees of freedom in the population model. We test its empirical performance in Section 7.
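A sketch of how this selection could be wired up on top of the tbm_fit sketch above, assuming a data tensor Y as in the earlier simulation sketch; the BIC form follows the reconstruction in (9), and the candidate grid is hypothetical.

import itertools
import numpy as np

def tbm_bic(Y, C, labels, R):
    """BIC score in the spirit of (9): Gaussian goodness-of-fit plus an effective-parameter penalty."""
    n = Y.size
    fitted = C[np.ix_(*labels)]                     # blockwise-constant fit
    rss = np.sum((Y - fitted) ** 2)
    p_eff = np.prod(R) + sum(Y.shape[k] * np.log(R[k]) for k in range(Y.ndim))
    return n * np.log(rss / n) + p_eff * np.log(n)

def select_clustering_size(Y, candidates):
    """Grid search over candidate clustering sizes, returning the BIC-minimizing choice."""
    best_R, best_score = None, np.inf
    for R in candidates:
        C, labels = tbm_fit(Y, R)                   # alternating-update sketch from Section 5.1
        score = tbm_bic(Y, C, labels, R)
        if score < best_score:
            best_R, best_score = R, score
    return best_R

R_hat = select_clustering_size(Y, candidates=list(itertools.product([2, 3, 4], repeat=3)))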

6 Extension to sparse estimation

In some large-scale applications, not every block in a data tensor is of equal importance. For example, in genome-wide expression data analysis, only a few entries represent the signals while the majority come from the background noise (see Figure 1b). While our estimator (4) is still able to handle this scenario by assigning small values to some of the block means $c_{r_1,\ldots,r_K}$'s, the estimates may suffer from high variance. It is thus beneficial to introduce regularized estimation for a better bias-variance trade-off and improved interpretability.

Here we illustrate the regularized TBM using sparsity on the block means for localizing important blocks in the data tensor. This problem can be formulated as a variable selection on the block parameters. We propose the following regularized least-square estimation:

$$(\hat{\mathcal{C}}, \hat{\boldsymbol{M}}_1, \ldots, \hat{\boldsymbol{M}}_K) = \arg\min \Big\{ \| \mathcal{Y} - \mathcal{C} \times_1 \boldsymbol{M}_1 \times_2 \cdots \times_K \boldsymbol{M}_K \|_F^2 + \lambda\, \|\mathcal{C}\|_\eta \Big\},$$

where $\mathcal{C}$ is the block-mean tensor, $\|\mathcal{C}\|_\eta$ is the penalty function with $\eta$ being an index for the tensor norm, and $\lambda \geq 0$ is the penalty tuning parameter. Some widely used penalties include the Lasso penalty $\|\mathcal{C}\|_1$, the sparse subset penalty $\|\mathcal{C}\|_0$, the ridge penalty $\|\mathcal{C}\|_2^2$, and the elastic net (a linear combination of $\|\mathcal{C}\|_1$ and $\|\mathcal{C}\|_2^2$), among many others.

For parsimony purposes, we only discuss the Lasso and sparse subset penalties; other penalizations can be derived similarly. Sparse estimation incurs only slight changes to Algorithm 1. When updating the core tensor in (8), we fit a penalized least-square problem with respect to $\mathcal{C}$. The closed form for the entry-wise sparse estimate is:

$$\hat{c}^{\,\text{sparse}}_{r_1,\ldots,r_K} = \begin{cases} \operatorname{sign}(\hat{c}_{r_1,\ldots,r_K}) \big( |\hat{c}_{r_1,\ldots,r_K}| - \lambda' \big)_+ & \text{for the Lasso penalty},\\[4pt] \hat{c}_{r_1,\ldots,r_K}\, \mathbb{1}\{ |\hat{c}_{r_1,\ldots,r_K}| > \lambda' \} & \text{for the sparse subset penalty}, \end{cases} \qquad (10)$$

where $\lambda'$ is a rescaled tuning parameter depending on $\lambda$ and the block size, and $\hat{c}_{r_1,\ldots,r_K}$ denotes the ordinary least-square estimate in (8). The choice of penalty often depends on the study goals and interpretations in specific applications. Given a penalty function, we select the tuning parameter $\lambda$ via the BIC (9), where we modify the effective number of parameters $p_e$ by replacing $\prod_k R_k$ with $\|\hat{\mathcal{C}}\|_0$. Here $\|\hat{\mathcal{C}}\|_0$ denotes the number of non-zero entries in the tensor $\hat{\mathcal{C}}$. The empirical performance of this proposal will be evaluated in Section 7.
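The entry-wise updates in (10) are the familiar soft- and hard-thresholding operators; a small sketch (ours) with the rescaled threshold passed in as lam:

import numpy as np

def threshold_core(C_ols, lam, penalty="lasso"):
    """Entry-wise sparse update of the block means, in the spirit of (10).
    C_ols: ordinary least-square core estimate from (8); lam: rescaled threshold lambda'."""
    if penalty == "lasso":              # soft-thresholding
        return np.sign(C_ols) * np.maximum(np.abs(C_ols) - lam, 0.0)
    if penalty == "subset":             # hard-thresholding (sparse subset / L0 penalty)
        return np.where(np.abs(C_ols) > lam, C_ols, 0.0)
    raise ValueError("unknown penalty")

C_sparse = threshold_core(np.array([[0.2, -1.5], [3.0, -0.1]]), lam=0.5, penalty="lasso")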

7 Experiments

In this section, we evaluate the empirical performance of our TBM method. We consider both non-sparse and sparse tensors, and compare the recovery accuracy with other tensor-based methods. Unless otherwise stated, we generate order-3 tensors under the Gaussian tensor block model (1). The block means are generated from i.i.d. Uniform[-3,3]. The entries in the noise tensor are generated from i.i.d. Gaussian $N(0, \sigma^2)$. In each simulation study, we report the summary statistics across simulation replications.

7.1 Finite-sample performance

In the first experiment, we assess the empirical relationship between the root mean squared error (RMSE) and the dimension. We consider four different settings of the clustering size (see Figure 2). We increase the dimension $d_1$ from 20 to 70, and for each choice of $d_1$, we set the other two dimensions to grow accordingly. Recall that our theoretical analysis suggests the convergence rate (6) for our estimator. Figure 2a plots the recovery error versus the dimension $d_1$. After rescaling the x-axis as in Figure 2b, we find that the RMSE decreases roughly at the rate of the inverse square root of the rescaled sample size. This is consistent with our theoretical result. It is observed that tensors with a higher number of blocks tend to yield higher recovery errors, as reflected by the upward shift of the curves as the clustering size increases. Indeed, a higher clustering size means a higher intrinsic dimension of the problem, thus increasing the difficulty of the estimation.

[Figure: decay.pdf]

Figure 2: Estimation error for block tensors with Gaussian noise. Each curve corresponds to a fixed clustering size. (a) Average RMSE against the dimension. (b) Average RMSE against the rescaled sample size.

In the second experiment, we evaluate the selection performance of our BIC criterion (9). Table 1 reports the selected numbers of clusters under various combinations of the dimension, the clustering size, and the noise level. We find that the BIC selection is accurate in the low-to-moderate noise settings. In the high-noise setting, the selected number of clusters is slightly smaller than the true number, but the accuracy increases when either the dimension increases or the clustering size is reduced. Within a tensor, the selection appears easier for shorter modes with a smaller number of clusters. This phenomenon is to be expected, since a shorter mode has more effective samples for clustering.

Dimensions True clustering sizes Noise Estimated clustering sizes
4
8
12
4
8
12
4
8
12
Table 1: Simulation results for estimating the clustering sizes. Bold numbers indicate no significant difference between the estimate and the ground truth, based on a $t$-test at level 0.05.

7.2 Comparison with alternative methods

Next, we compare our TBM method with two popular low-rank tensor estimation methods: (i) CP decomposition and (ii) Tucker decomposition. Following the literature [15, 8, 12], we perform the clustering by applying $k$-means to the resulting factors along each of the modes. We refer to such techniques as CP+$k$-means and Tucker+$k$-means.

We generate noisy block tensors with five clusters on each of the modes, and then assess both the estimation and clustering performance of each method. Note that TBM takes a single shot to perform estimation and clustering simultaneously, whereas CP- and Tucker-based methods separate these two tasks into two steps. We use the RMSE to assess the estimation accuracy and the clustering error rate (CER) to measure the clustering accuracy. The CER is calculated using the disagreements (i.e., one minus the Rand index) between the true and estimated block partitions in the three-way tensor. For a fair comparison, we provide all methods with the true number of clusters.
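A small sketch (ours) of the CER computation described above, via pair counting on the contingency table between the two block partitions; the per-entry block labels are assumed to be flattened into 1-D label vectors beforehand.

import numpy as np

def rand_index(labels_a, labels_b):
    """Rand index between two flat label vectors, computed from the pair-counting contingency table."""
    _, a_inv = np.unique(labels_a, return_inverse=True)
    _, b_inv = np.unique(labels_b, return_inverse=True)
    table = np.zeros((a_inv.max() + 1, b_inv.max() + 1), dtype=np.int64)
    np.add.at(table, (a_inv, b_inv), 1)
    comb2 = lambda x: x * (x - 1) // 2                  # number of pairs within a group
    n_pairs = comb2(len(labels_a))
    same_both = comb2(table).sum()
    same_a = comb2(table.sum(axis=1)).sum()
    same_b = comb2(table.sum(axis=0)).sum()
    return (n_pairs + 2 * same_both - same_a - same_b) / n_pairs

def clustering_error_rate(labels_true, labels_est):
    """CER as used in the text: one minus the Rand index between the block partitions."""
    return 1.0 - rand_index(labels_true, labels_est)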

Figure 3a shows that TBM achieves the lowest estimation error among the three methods. The gain in accuracy is more pronounced as the noise grows. Neither CP nor Tucker recovers the signal tensor, although Tucker appears to deliver a modest clustering performance (Figure 3b). One possible explanation is that the Tucker model imposes orthogonality on the factors, which makes the subsequent $k$-means clustering easier than that for the CP factors. Figures 3b-c show that the clustering error increases with noise but decreases with dimension. This agrees with our expectation, as in tensor data analysis, a larger dimension implies a larger sample size.

[Figure: compare]

Figure 3: Performance comparison in terms of RMSE and CER. (a) Estimation error against noise. (b) and (c) Clustering error against noise, for tensors of two different dimensions.

Sparse case.

We then evaluate the performance when the signal tensor is sparse. The simulated model is the same as before, except that we generate the block means from a mixture of a point mass at zero and Uniform[-3,3], with probabilities equal to the sparsity rate and one minus the sparsity rate, respectively. The performance is quantified via the sparsity error rate, which is the proportion of entries that were incorrectly set to zero or incorrectly set to non-zero. We also report the proportion of true zeros that were correctly identified (correct zeros).

Table 2 reports the BIC-selected results averaged across 50 simulations. We see a substantial benefit obtained by penalization. The proposed criterion is able to guide the algorithm to correctly identify zeros, while maintaining good accuracy in identifying non-zeros. The resulting sparsity level is close to the ground truth. The rows without penalization correspond to the three non-sparse algorithms (CP, Tucker, and non-sparse TBM). Because non-sparse algorithms fail to identify zeros, they show equally poor performance in all selection metrics. Figure 4 shows the estimation error and sparsity error against the noise level in one of the sparsity settings. Again, the sparse TBM outperforms the other methods.

Columns of Table 2: Sparsity rate, Noise, Penalization, Estimated sparsity rate, Correct zero rate, Sparsity error rate.

Table 2: Sparse TBM for estimating block tensors. The reported values are means of the selected quantities across 50 simulations using the proposed BIC criterion. Numbers in bold indicate no significant difference between the estimate and the ground truth, based on a $t$-test at level 0.05.
[Figures: clustering_404040_sparse, clustering_correct_sparse]

Figure 4: (a) Estimation error and (b) sparsity error rate against noise for sparse tensors of dimension $40 \times 40 \times 40$.

7.3 Real data analysis

Lastly, we apply our method to two real datasets. The first dataset is a real-valued tensor, consisting of approximately 1 million expression values from 13 brain tissues, 193 individuals, and 362 genes [7]. We subtracted the overall mean expression from the data, and applied the sparsity-penalized TBM to identify important blocks in the resulting tensor. The top blocks exhibit a clear tissue-by-gene specificity. In particular, the top over-expressed block is driven by tissues {Substantia nigra, Spinal cord} and genes {GFAP, MBP}, suggesting their elevated expression across individuals. In fact, GFAP encodes filament proteins for mature astrocytes and MBP encodes a myelin sheath protein for oligodendrocytes, both of which play important roles in the central nervous system [28]. Our method also identifies blocks with extremely negative means (i.e., under-expressed blocks). The top under-expressed block is driven by tissues {Cerebellum, Cerebellar Hemisphere} and genes {CDH9, GPR6, RXFP1, CRH, DLX5/6, NKX2-1, SLC17A8}. The gene DLX6 encodes proteins involved in forebrain development [28], whereas cerebellum tissues are located in the hindbrain. The opposite spatial function is consistent with the observed under-expression pattern.

The second dataset we consider is the Nations data [3]. This is a binary tensor consisting of 56 political relationships among 14 countries between 1950 and 1965. We note that 78.9% of the entries are zero. Again, we applied the sparsity-penalized TBM to identify important blocks in the data. We found that the 14 countries are naturally partitioned into 5 clusters: two representing neutral countries {Brazil, Egypt, India, Israel, Netherlands} and {Burma, Indonesia, Jordan}, one eastern bloc {China, Cuba, Poland, USSR}, and two western blocs, {USA} and {UK}. The relation types are partitioned into 7 clusters, among which the exports-related activities {reltreaties, book translations, relbooktranslations, exports3, relexports} and the NGO-related activities {relintergovorgs, relngo, intergovorgs3, ngoorgs3} are two major clusters that involve connections between the neutral and western blocs. Other top blocks are described in the Supplement.

8 Conclusion

We have developed a statistical framework for studying the tensor block model. Under the assumption that tensor entries are distributed with a block-specific mean, our estimator achieves a convergence rate that is faster than previously possible. Our TBM method applies to a broad range of data distributions and can handle both sparse and dense data tensors. We demonstrate the benefit of sparse regularization for the power of detection. In specific applications, prior knowledge may suggest other regularizations on the parameters. For example, in multi-layer network analysis, it may sometimes be reasonable to impose symmetry on the parameters along certain modes. In some other applications, non-negativity of parameter values may be enforced. We leave these directions for future study.

Acknowledgements

This research was supported by the University of Wisconsin-Madison, Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.

References

  • [1] Fengyu Cong, Qiu-Hua Lin, Li-Dan Kuang, Xiao-Feng Gong, Piia Astikainen, and Tapani Ristaniemi. Tensor decomposition of EEG signals: a brief review. Journal of Neuroscience Methods, 248:59–69, 2015.
  • [2] Hua Zhou, Lexin Li, and Hongtu Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552, 2013.
  • [3] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In International Conference on Machine Learning, volume 11, pages 809–816, 2011.
  • [4] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934, 2013.
  • [5] Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In International Conference on Machine Learning, pages 163–171, 2013.
  • [6] Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye. Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220, 2013.
  • [7] Miaoyan Wang, Jonathan Fischer, and Yun S Song. Three-way clustering of multi-tissue multi-individual gene expression data using constrained tensor decomposition. Annals of Applied Statistics, in press, 2019.
  • [8] Victoria Hore, Ana Viñuela, Alfonso Buil, Julian Knight, Mark I McCarthy, Kerrin Small, and Jonathan Marchini. Tensor decomposition for multiple-tissue gene expression experiments. Nature Genetics, 48(9):1094, 2016.
  • [9] Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6(1-4):164–189, 1927.
  • [10] Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
  • [11] Kean Ming Tan and Daniela M Witten. Sparse biclustering of transposable data. Journal of Computational and Graphical Statistics, 23(4):985–1008, 2014.
  • [12] Tamara G Kolda and Jimeng Sun. Scalable tensor decompositions for multi-aspect data mining. In 2008 Eighth IEEE International Conference on Data Mining, pages 363–372. IEEE, 2008.
  • [13] Chang-Dong Wang, Jian-Huang Lai, and S Yu Philip. Multi-view clustering based on belief propagation. IEEE Transactions on Knowledge and Data Engineering, 28(4):1007–1021, 2015.
  • [14] Stefanie Jegelka, Suvrit Sra, and Arindam Banerjee. Approximation algorithms for tensor clustering. In International Conference on Algorithmic Learning Theory, pages 368–383. Springer, 2009.
  • [15] Eric C Chi, Brian R Gaines, Will Wei Sun, Hua Zhou, and Jian Yang. Provable convex co-clustering of tensors. arXiv preprint arXiv:1803.06518, 2018.
  • [16] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
  • [17] Sara C Madeira and Arlindo L Oliveira. Biclustering algorithms for biological data analysis: a survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 1(1):24–45, 2004.
  • [18] Emmanuel Abbe. Community detection and stochastic block models: recent developments. The Journal of Machine Learning Research, 18(1):6446–6531, 2017.
  • [19] Chao Gao and Zongming Ma. Minimax rates in network analysis: Graphon estimation, community detection and hypothesis testing. arXiv preprint arXiv:1811.06055, 2018.
  • [20] Robin A Darton. Rotation in factor analysis. Journal of the Royal Statistical Society: Series D (The Statistician), 29(3):167–194, 1980.
  • [21] Hervé Abdi. Factor rotations in factor analyses. Encyclopedia for Research Methods for the Social Sciences, Sage: Thousand Oaks, pages 792–795, 2003.
  • [22] Anru Zhang and Dong Xia. Tensor SVD: Statistical and computational limits. IEEE Transactions on Information Theory, 2018.
  • [23] Chao Gao, Yu Lu, Zongming Ma, and Harrison H Zhou. Optimal estimation and completion of matrices with biclustering structures. The Journal of Machine Learning Research, 17(1):5602–5630, 2016.
  • [24] Philippe Rigollet and Jan-Christian Hütter. High dimensional statistics. Lecture notes for course 18S997, 2015.
  • [25] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014.
  • [26] Miaoyan Wang and Yun Song. Tensor decompositions via two-mode higher-order SVD (HOSVD). In Artificial Intelligence and Statistics, pages 614–622, 2017.
  • [27] Daniel Aloise, Amit Deshpande, Pierre Hansen, and Preyas Popat. NP-hardness of Euclidean sum-of-squares clustering. Machine Learning, 75(2):245–248, 2009.
  • [28] Nuala A O’Leary, Mathew W Wright, J Rodney Brister, Stacy Ciufo, Diana Haddad, Rich McVeigh, Bhanu Rajput, Barbara Robbertse, Brian Smith-White, Danso Ako-Adjei, et al. Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation. Nucleic Acids Research, 44(D1):D733–D745, 2015.