Abstract
We consider the problem of identifying multiway block structure from a large noisy tensor. Such problems arise frequently in applications such as genomics, recommender systems, topic modeling, and sensor network localization. We propose a tensor block model, develop a unified least-squares estimation procedure, and obtain theoretical accuracy guarantees for multiway clustering. The statistical convergence of the estimator is established, and we show that the associated clustering procedure achieves partition consistency. A sparse regularization is further developed for identifying important blocks with elevated means. The proposal handles a broad range of data types, including binary, continuous, and hybrid observations. Through simulations and applications to two real datasets, we demonstrate that our approach outperforms previous methods.
Keywords: Tensor block model, Clustering, Least-squares estimation, Dimension reduction.
Multiway clustering via tensor block models
Miaoyan Wang and Yuchen Zeng. Miaoyan Wang is Assistant Professor, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, Email: miaoyan.wang@wisc.edu; Yuchen Zeng is a BS/MS student, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, Email: yzeng58@wisc.edu.
Department of Statistics, University of Wisconsin-Madison
1 Introduction
Higher-order tensors have recently attracted increased attention in data-intensive fields such as neuroscience [1], social networks [2], computer vision [3], and genomics [4, 5]. In many applications, the data tensors are often expected to have underlying block structure. One example is multi-tissue expression data [4], in which genome-wide expression profiles are collected from different tissues in a number of individuals. There may be groups of genes similarly expressed in subsets of tissues and individuals; mathematically, this implies an underlying three-way block structure in the data tensor. In a different context, block structure may emerge in a binary-valued tensor. Examples include multilayer network data [2], with the nodes representing the individuals and the layers representing the multiple types of relations. Here a planted block represents a community of individuals that are highly connected within a class of relationships.
This paper presents a new method and the associated theory for tensors with block structure. We develop a unified least-squares estimation procedure for identifying multiway block structure. The proposal applies to a broad range of data types, including binary, continuous, and hybrid observations. We establish a high-probability error bound for the resulting estimator, and show that the procedure enjoys consistency guarantees on the block structure recovery as the dimension of the data tensor grows. Furthermore, we develop a sparse extension of the tensor block model for block selection. Figure 1 shows two immediate examples of our method. When the data tensor possesses a checkerbox pattern modulo some unknown reordering of entries, our method amounts to multiway clustering that simultaneously clusters each mode of the tensor (Figure 1a). When the data tensor has no full checkerbox structure but contains a small number of subtensors with elevated means, we develop a sparse version of our method to detect these subtensors of interest (Figure 1b).
Related work. Our work is closely related to, but also clearly distinct from, low-rank tensor decomposition. A number of methods have been developed for low-rank tensor estimation, including the CANDECOMP/PARAFAC (CP) decomposition [6] and the Tucker decomposition [7]. The CP model decomposes a tensor into a sum of rank-1 tensors, whereas the Tucker model decomposes a tensor into a core tensor multiplied by orthogonal matrices in each mode. In this paper we investigate an alternative block structure assumption, which has yet to be studied for higher-order tensors. Note that a block structure automatically implies low-rankness. However, as we will show in Section 4, a direct application of low-rank estimation to the current setting results in an inferior estimator. Therefore, a full exploitation of the block structure is necessary; this is the focus of the current paper.
Our work is also connected to biclustering [8] and its higher-order extensions [9, 10]. Existing multiway clustering methods [9, 10, 5, 11] typically take a two-step procedure, first estimating a low-dimensional representation of the data tensor and then applying clustering algorithms to the tensor factors. In contrast, our tensor block model takes a single shot to perform estimation and clustering simultaneously. This approach achieves higher accuracy and improved interpretability. Moreover, earlier solutions to multiway clustering [12, 9] focus on algorithmic effectiveness, leaving the statistical optimality of the estimators unaddressed. Very recently, Chi et al. [13] provided an attempt to study the statistical properties of the tensor block model. We will show that our estimator obtains a faster convergence rate than theirs, and its power is further boosted with a sparse regularization.
2 Preliminaries
We begin by reviewing a few basic facts about tensors [14]. We use $\mathcal{Y} = (y_{i_1,\ldots,i_K}) \in \mathbb{R}^{d_1\times\cdots\times d_K}$ to denote an order-$K$, $(d_1,\ldots,d_K)$-dimensional tensor. The multilinear multiplication of a tensor $\mathcal{C} = (c_{a_1,\ldots,a_K}) \in \mathbb{R}^{R_1\times\cdots\times R_K}$ by matrices $M_k = (m^{(k)}_{i_k,a_k}) \in \mathbb{R}^{d_k\times R_k}$ is defined as
$$\mathcal{C}\times_1 M_1 \times_2 \cdots \times_K M_K = \left( \sum_{a_1,\ldots,a_K} c_{a_1,\ldots,a_K}\, m^{(1)}_{i_1,a_1}\cdots m^{(K)}_{i_K,a_K} \right),$$
which results in an order-$K$, $(d_1,\ldots,d_K)$-dimensional tensor. For any two tensors $\mathcal{Y} = (y_{i_1,\ldots,i_K})$, $\mathcal{Y}' = (y'_{i_1,\ldots,i_K})$ of identical order and dimensions, their inner product is defined as $\langle \mathcal{Y}, \mathcal{Y}'\rangle = \sum_{i_1,\ldots,i_K} y_{i_1,\ldots,i_K}\, y'_{i_1,\ldots,i_K}$. The Frobenius norm of a tensor is defined as $\|\mathcal{Y}\|_F = \sqrt{\langle \mathcal{Y}, \mathcal{Y}\rangle}$; it is the Euclidean norm of $\mathcal{Y}$ regarded as a $(\prod_k d_k)$-dimensional vector. The maximum norm of a tensor is defined as $\|\mathcal{Y}\|_\infty = \max_{i_1,\ldots,i_K} |y_{i_1,\ldots,i_K}|$. An order-$(K-1)$ slice of $\mathcal{Y}$ is a subtensor of $\mathcal{Y}$ obtained by holding the index in one mode fixed while letting the indices in all other modes vary.
A clustering of $d$ objects is a partition of the index set $[d] = \{1,\ldots,d\}$ into $R$ disjoint nonempty subsets. We refer to the number of clusters, $R$, as the clustering size. Equivalently, the clustering (or partition) can be represented using a "membership matrix". A membership matrix $M \in \{0,1\}^{d\times R}$ is an incidence matrix whose entry $m_{i,r}$ is 1 if and only if element $i$ belongs to cluster $r$, and 0 otherwise. Throughout the paper, we will use the terms "clustering", "partition", and "membership matrix" interchangeably. For a higher-order tensor, the concept of index partition applies to each of the $K$ modes. A block is a subtensor induced by the index partitions along each of the $K$ modes. We use the term "cluster" to refer to the marginal partition on mode $k$, and reserve the term "block" for the multiway partition of the tensor.
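The label-to-membership-matrix correspondence described above can be sketched in a few lines; the helper name below is ours, not from the paper.

```python
import numpy as np

def membership_matrix(labels, R):
    """Build the d x R incidence matrix M with M[i, r] = 1 iff item i lies in cluster r."""
    labels = np.asarray(labels)
    M = np.zeros((labels.size, R), dtype=int)
    M[np.arange(labels.size), labels] = 1
    return M
```

Each row sums to 1, reflecting that every item belongs to exactly one cluster.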
3 Tensor block model
Let $\mathcal{Y} = (y_{i_1,\ldots,i_K})$ denote an order-$K$, $(d_1,\ldots,d_K)$-dimensional data tensor. The main assumption of the tensor block model (TBM) is that the observed data tensor is a noisy realization of an underlying tensor that exhibits a checkerbox structure (see Figure 1a). Specifically, suppose that the $k$th mode of the tensor consists of $R_k$ clusters. If the tensor entry $(i_1,\ldots,i_K)$ belongs to the block determined by the $a_k$th cluster in mode $k$ for $k = 1,\ldots,K$, then we assume that
$$y_{i_1,\ldots,i_K} = c_{a_1,\ldots,a_K} + \varepsilon_{i_1,\ldots,i_K}, \qquad (1)$$
where $c_{a_1,\ldots,a_K}$ is the mean of the tensor block indexed by $(a_1,\ldots,a_K)$, and the $\varepsilon_{i_1,\ldots,i_K}$'s are independent, mean-zero noise terms to be specified later. Our goal is to (i) find the clustering along each of the $K$ modes, and (ii) estimate the block means $c_{a_1,\ldots,a_K}$, such that a corresponding blockwise-constant checkerbox structure emerges in the data tensor.
The tensor block model (1) falls into a general class of non-overlapping, constant-mean clustering models [15], in that each tensor entry belongs to exactly one block with a common mean. The TBM can be equivalently expressed as a special tensor Tucker model,
$$\mathcal{Y} = \mathcal{C}\times_1 M_1 \times_2 \cdots \times_K M_K + \mathcal{E}, \qquad (2)$$
where $\mathcal{C}\in\mathbb{R}^{R_1\times\cdots\times R_K}$ is a core tensor consisting of block means, $M_k\in\{0,1\}^{d_k\times R_k}$ is a membership matrix indicating the block allocations along mode $k$ for $k=1,\ldots,K$, and $\mathcal{E}$ is the noise tensor. We view the TBM (2) as a super-sparse Tucker model, in the sense that each row of $M_k$ contains a single 1, with all remaining entries 0.
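The Tucker-style expansion in (2) can be sketched directly by applying the mode-wise products to a core of block means; the helper name below is ours.

```python
import numpy as np

def block_tensor(core, memberships):
    """Expand a core tensor of block means into a full checkerbox tensor
    via the multilinear product core x_1 M_1 x_2 ... x_K M_K."""
    T = core
    for k, M in enumerate(memberships):
        # Contract mode k of T (size R_k) against the columns of M (d_k x R_k),
        # then move the new d_k axis back into position k.
        T = np.moveaxis(np.tensordot(M, np.moveaxis(T, k, 0), axes=([1], [0])), 0, k)
    return T
```

For example, a 2x2 core with both modes labeled [0, 0, 1] expands into a 3x3 matrix with a 2x2 checkerbox pattern.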
We make a general assumption on the noise tensor $\mathcal{E} = (\varepsilon_{i_1,\ldots,i_K})$. The noise terms $\varepsilon_{i_1,\ldots,i_K}$'s are assumed to be independent, mean-zero subgaussian, where $\sigma > 0$ is the subgaussianity parameter. More precisely,
$$\mathbb{E}\, e^{\lambda \varepsilon_{i_1,\ldots,i_K}} \le e^{\sigma^2\lambda^2/2} \quad \text{for all } \lambda\in\mathbb{R}. \qquad (3)$$
The assumption (3) incorporates common situations such as Gaussian noise, Bernoulli noise, and noise with bounded support. In particular, we consider two important examples of the TBM:
Example 1 (Gaussian tensor block model). The Gaussian TBM is the model (1) with i.i.d. noise $\varepsilon_{i_1,\ldots,i_K} \sim N(0,\sigma^2)$, in which case the subgaussian condition (3) holds with equality.
Example 2 (Stochastic tensor block model).
Let $\mathcal{Y}\in\{0,1\}^{d_1\times\cdots\times d_K}$ be a binary-valued tensor. The stochastic tensor block model (STBM) is a special case of model (1), with the subgaussianity parameter equal to $1/2$ (see Property 1 in the Appendix). The STBM can be viewed as an extension, to higher-order tensors, of the popular stochastic block model [16, 17] for matrix-based network analysis. In the field of community detection, multilayer stochastic block models have also been developed for multi-relational network data analysis [18, 19].
More generally, our model also applies to hybrid error distributions, in which different types of distributions are allowed for different portions of the tensor. This scenario may arise, for example, when the data tensor represents concatenated measurements from multiple data sources.
Before we discuss the estimation, we present the identifiability of the TBM.
Assumption 1 (Irreducible core).
The core tensor $\mathcal{C}$ is called irreducible if it cannot be written as a block tensor with the number of mode-$k$ clusters smaller than $R_k$, for any $k\in[K]$.
In the matrix case $K = 2$, irreducibility is equivalent to saying that $\mathcal{C}$ has no two identical rows and no two identical columns. In the higher-order case, the assumption requires that no two order-$(K-1)$ slices of $\mathcal{C}$ along the same mode are identical. Note that irreducibility is a weaker assumption than full-rankness.
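The slice-comparison characterization above can be checked numerically; the following is our own small sketch (with a tolerance for floating-point comparison), not code from the paper.

```python
import numpy as np

def is_irreducible(core, tol=1e-12):
    """Check that no two slices of the core coincide along any mode,
    i.e. no pair of clusters in any mode could be merged."""
    for k in range(core.ndim):
        # Flatten each mode-k slice into a row for pairwise comparison.
        C = np.moveaxis(core, k, 0).reshape(core.shape[k], -1)
        for a in range(C.shape[0]):
            for b in range(a + 1, C.shape[0]):
                if np.max(np.abs(C[a] - C[b])) <= tol:
                    return False
    return True
```

In the matrix case this reduces exactly to checking for duplicate rows and duplicate columns.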
Proposition 1 (Identifiability). Suppose the core tensor is irreducible (Assumption 1). Then the factorization (2) is identifiable: the core tensor $\mathcal{C}$ and the membership matrices $M_k$ are uniquely determined by the mean tensor, up to permutations of cluster labels.
The identifiability property of the TBM compares favorably with that of classical factor models [20, 21]. In the Tucker decomposition [22, 14] and many other factor analyses [20, 21], the factors are identifiable only up to orthogonal rotations. Those models recover only the (column) space spanned by the factors, but not the individual factors themselves. In contrast, our model does not suffer from rotational invariance, and as we show in Section 4, every individual factor is consistently estimated in high dimensions. This benefits the interpretation of the factors in the tensor block model.
We propose a least-squares approach for estimating the TBM. Let $\Theta = \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K$ denote the mean signal tensor with block structure. The mean tensor is assumed to belong to the following parameter space:
$$\mathcal{P}_{R_1,\ldots,R_K} = \left\{ \Theta = \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K \colon\ \mathcal{C}\in\mathbb{R}^{R_1\times\cdots\times R_K},\ M_k \text{ is a } d_k\times R_k \text{ membership matrix} \right\}.$$
In the following theoretical analysis, we assume the clustering sizes $(R_1,\ldots,R_K)$ are known and simply write $\mathcal{P}$ for short. The adaptation to unknown $(R_1,\ldots,R_K)$ will be addressed in Section 5.2. The least-squares estimator for the TBM (1) is
$$\hat\Theta = \operatorname*{arg\,min}_{\Theta\in\mathcal{P}} \|\mathcal{Y} - \Theta\|_F^2. \qquad (4)$$
The objective is equal (up to constants) to the residual sum of squares; hence the name least-squares estimator.
4 Statistical convergence
In this section, we establish the convergence rate of the least-squares estimator (4) with respect to two metrics. The first metric is the mean squared error (MSE):
$$\mathrm{MSE}(\hat\Theta, \Theta_{\mathrm{true}}) = \frac{1}{\prod_k d_k}\, \|\hat\Theta - \Theta_{\mathrm{true}}\|_F^2, \qquad (5)$$
where $\Theta_{\mathrm{true}}$ and $\hat\Theta$ are the true and estimated mean tensors, respectively. While the squared-error loss corresponds to the likelihood under the Gaussian tensor block model, the same assertion does not hold for other types of distributions such as the stochastic tensor block model. We will show that, with very high probability, a simple least-squares estimator achieves a fast convergence rate in a general class of block tensor models.
Theorem 1 (Convergence rate of MSE). Assume the tensor block model (1) with subgaussian noise (3). Then, with high probability, the least-squares estimator (4) satisfies
$$\mathrm{MSE}(\hat\Theta, \Theta_{\mathrm{true}}) \le C\sigma^2\, \frac{\prod_k R_k + \sum_k d_k \log R_k}{\prod_k d_k}, \qquad (6)$$
where $C > 0$ is a universal constant.
The convergence rate of MSE in (6) consists of two parts. The first part, $\prod_k R_k$, is the number of parameters in the core tensor $\mathcal{C}$, while the second part, $\sum_k d_k \log R_k$, reflects the complexity of estimating the membership matrices $M_k$'s. It is the price one has to pay for not knowing the locations of the blocks.
We compare our bound with the existing literature. The Tucker tensor decomposition has a minimax convergence rate proportional to $\prod_k r_k + \sum_k d_k r_k$ [22], where $r_k$ is the multilinear rank in mode $k$. Applying the Tucker result to the TBM yields the rate $\prod_k R_k + \sum_k d_k R_k$, because the mode-$k$ rank is bounded by the number of mode-$k$ clusters. Now, as both the dimension $d_k$ and the clustering size $R_k$ tend to infinity, we have $\sum_k d_k \log R_k = o(\sum_k d_k R_k)$. Therefore, by fully exploiting the block structure, we obtain a better convergence rate than previously possible.
Recently, [13] proposed a convex relaxation for estimating the TBM. In the special case when the tensor dimensions are equal at every mode, $d_1 = \cdots = d_K = d$, their estimator has a convergence rate of order $d^{-K/2}$. As seen from (6), our estimator obtains a much better convergence rate of order $d^{1-K}$ (up to a logarithmic factor), which is especially favorable as the order $K$ increases.
The bound (6) generalizes previous results on structured matrix estimation in network analysis [23, 16]. Earlier work [16] suggests the following heuristic on the sample complexity for the matrix case:
$$\text{error rate} \asymp \frac{\#\{\text{parameters}\} + \text{combinatorial complexity of the block structure}}{\#\{\text{sample size}\}}. \qquad (7)$$
Our result supports this important principle for general $K$. Note that, in the TBM, the sample size is the total number of entries $\prod_k d_k$, the number of parameters is $\prod_k R_k$, and the combinatorial complexity for estimating the block structure is of order $\sum_k d_k \log R_k$.
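As a quick numeric illustration of the heuristic (7), the sketch below combines the three quantities named above into a per-entry error rate; the function name and the exact form of the expression are our own illustration, not the paper's formula.

```python
import numpy as np

def tbm_rate(dims, Rs, sigma2=1.0):
    """Heuristic per-entry error rate for the TBM:
    sigma^2 * (#parameters + combinatorial complexity) / sample size,
    with #parameters = prod(R_k) and complexity = sum_k d_k * log(R_k)."""
    n = np.prod(dims)                                   # sample size: total entries
    params = np.prod(Rs)                                # core tensor entries
    complexity = sum(d * np.log(R) for d, R in zip(dims, Rs))
    return sigma2 * (params + complexity) / n
```

As expected, the rate shrinks as the dimensions grow and inflates as the clustering sizes grow.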
Next we study the consistency of the partition. To define the misclassification rate (MCR), we need some additional notation. Let $M_k, M_k'$ be two mode-$k$ membership matrices, and let $D = (D_{a,b})$ be the mode-$k$ confusion matrix with elements $D_{a,b} = \frac{1}{d_k}\,|\{i\in[d_k] \colon m_{i,a} = m'_{i,b} = 1\}|$, where $a, b\in[R_k]$. Note that the row/column sums of $D$ represent the node proportions in each cluster defined by $M_k$ or $M_k'$. We restrict ourselves to non-degenerate clusterings; that is, the row/column sums of $D$ are bounded below by a positive constant. With a slight abuse of notation, we still use $\mathcal{P}$ to denote the parameter space with the non-degeneracy assumption. The least-squares estimator (4) should also be interpreted with this constraint imposed.
We define the mode-$k$ misclassification rate (MCR) as
$$\mathrm{MCR}(M_k, M_k') = \max_{b\in[R_k]}\ \max_{a \ne \operatorname*{arg\,max}_{a'} D_{a',b}} D_{a,b}. \qquad (8)$$
In other words, the MCR is the elementwise maximum of the confusion matrix after removing the largest entry from each column. Under the non-degeneracy assumption, $\mathrm{MCR}(M_k, M_k') = 0$ if and only if the confusion matrix is a permutation of a diagonal matrix; that is, the estimated partition matches the true partition, up to permutations of cluster labels.
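The MCR in (8) can be computed from the confusion matrix in a few lines; the helper below is our sketch, taking two membership matrices as input.

```python
import numpy as np

def mcr(M_true, M_est):
    """Misclassification rate: the largest entry of the normalized confusion
    matrix after removing the top entry of each column."""
    d = M_true.shape[0]
    D = (M_true.T @ M_est) / d                     # confusion matrix of proportions
    mask = np.ones_like(D, dtype=bool)
    mask[np.argmax(D, axis=0), np.arange(D.shape[1])] = False  # drop column maxima
    return D[mask].max() if mask.any() else 0.0
```

A perfect clustering (up to label permutation) yields MCR 0; mislabeling one of four items in a two-cluster partition yields MCR 0.25.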
Theorem 2 (Convergence rate of MCR).
The above theorem shows that our estimator consistently recovers the block structure as the dimension of the data tensor grows. The block-mean gap plays the role that the eigengap plays in the classical tensor Tucker decomposition [22]. Table 1 summarizes the comparison of various tensor methods in the special case when $d_1 = \cdots = d_K = d$ and $R_1 = \cdots = R_K = R$.
5 Numerical implementation
5.1 Alternating optimization
We introduce an alternating optimization scheme for solving (4). Estimating $\Theta$ consists of finding both the core tensor and the membership matrices $M_k$'s. The optimization (4) can be written as
$$\min_{\mathcal{C},\, M_1,\ldots,M_K}\ \|\mathcal{Y} - \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K\|_F^2,$$
where $\mathcal{C}\in\mathbb{R}^{R_1\times\cdots\times R_K}$ and each $M_k$ ranges over $d_k\times R_k$ membership matrices.
The decision variables consist of $K+1$ blocks of variables, one for the core tensor $\mathcal{C}$ and $K$ for the membership matrices $M_k$'s. We notice that, if any $K$ out of the $K+1$ blocks of variables are known, then the last block of variables can be solved explicitly. This observation suggests that we can iteratively update one block of variables at a time while keeping the others fixed. Specifically, given the collection of $M_k$'s, the core tensor estimate consists of the sample averages of each tensor block. Given the block means and all but one membership matrix, the last membership matrix can be solved using a simple nearest-neighbor search over only $R_k$ discrete points. The full procedure is described in Algorithm 1.
$$\hat c_{a_1,\ldots,a_K} = \frac{1}{|\Omega_{a_1,\ldots,a_K}|} \sum_{(i_1,\ldots,i_K)\in\Omega_{a_1,\ldots,a_K}} y_{i_1,\ldots,i_K}, \qquad (10)$$
where $\Omega_{a_1,\ldots,a_K}$ denotes the set of entries in the block indexed by $(a_1,\ldots,a_K)$.
Algorithm 1 can be viewed as a higher-order extension of the ordinary (one-way) $k$-means algorithm. The core tensor $\mathcal{C}$ plays the role of the centroids. As each iteration reduces the value of the objective function, which is bounded below, convergence of the algorithm is guaranteed. The per-iteration computational cost scales linearly with the sample size $\prod_k d_k$, and this complexity matches that of classical tensor methods [24, 25, 22]. We recognize that obtaining the global optimizer for such a non-convex optimization is typically difficult [26, 1]. Following common practice in non-convex optimization [1], we run the algorithm multiple times, using random initializations with independent one-way $k$-means on each of the modes.
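The two alternating updates can be sketched compactly; the code below is our own minimal implementation of the scheme (not the authors' code): block means are updated as within-block sample averages, and each mode's memberships are then reassigned by a nearest-centroid search.

```python
import numpy as np

def unfold(T, k):
    """Mode-k matricization: mode-k slices become rows."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def tbm_fit(Y, Rs, n_iter=30, seed=0):
    """Alternating least squares for the tensor block model (a higher-order
    k-means). Returns the core of block means and one label vector per mode."""
    rng = np.random.default_rng(seed)
    K = Y.ndim
    labels = [rng.integers(R, size=d) for R, d in zip(Rs, Y.shape)]
    core = np.zeros(Rs)
    for _ in range(n_iter):
        # Core update: sample average of Y within each block.
        idx = np.meshgrid(*labels, indexing='ij')
        core = np.zeros(Rs)
        counts = np.zeros(Rs)
        np.add.at(core, tuple(idx), Y)
        np.add.at(counts, tuple(idx), 1.0)
        core /= np.maximum(counts, 1.0)
        # Membership updates: one mode at a time, holding the others fixed.
        for k in range(K):
            P = core
            for j in range(K):
                if j != k:
                    P = np.take(P, labels[j], axis=j)   # expand mode j to length d_j
            # Nearest-centroid search over the R_k candidate profiles.
            D = ((unfold(Y, k)[:, None, :] - unfold(P, k)[None, :, :]) ** 2).sum(-1)
            labels[k] = D.argmin(axis=1)
    return core, labels
```

Each sweep weakly decreases the objective, mirroring the monotone convergence argument above.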
5.2 Tuning parameter selection
Algorithm 1 takes the numbers of clusters $(R_1,\ldots,R_K)$ as an input. In practice such information is often unknown and needs to be estimated from the data $\mathcal{Y}$. We propose to select this tuning parameter using the Bayesian information criterion (BIC),
$$\mathrm{BIC}(R_1,\ldots,R_K) = \left(\prod_k d_k\right) \log\!\left(\frac{\|\mathcal{Y} - \hat\Theta\|_F^2}{\prod_k d_k}\right) + p_e \log\!\left(\prod_k d_k\right), \qquad (11)$$
where $p_e$ is the effective number of parameters in the model. In our case we take $p_e = \prod_k R_k + \sum_k d_k \log R_k$, which is inspired by (7). We choose the $(R_1,\ldots,R_K)$ that minimizes $\mathrm{BIC}$ via grid search. Our choice of BIC aims to balance the goodness of fit to the data against the degrees of freedom in the population model. We test its empirical performance in Section 7.
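The BIC computation can be sketched as follows, assuming the standard Gaussian-likelihood form $n\log(\mathrm{RSS}/n) + p_e \log n$ together with the effective parameter count described above; the exact constants in the paper's criterion may differ, so treat this as an illustration.

```python
import numpy as np

def tbm_bic(Y, fitted, Rs):
    """BIC for a fitted tensor block model: n*log(RSS/n) + p_e*log(n),
    with effective parameter count p_e = prod(R_k) + sum_k d_k*log(R_k)
    (an assumption mirroring the complexity count in the text)."""
    n = Y.size
    rss = ((Y - fitted) ** 2).sum()
    p_e = np.prod(Rs) + sum(d * np.log(R) for d, R in zip(Y.shape, Rs))
    return n * np.log(rss / n) + p_e * np.log(n)
```

Holding the fit fixed, larger clustering sizes incur a larger penalty, so the criterion trades fit against complexity as intended.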
6 Extension to sparse estimation
In some large-scale applications, not every block in a data tensor is of equal importance. For example, in genome-wide expression data analysis, only a few entries represent signals while the majority come from background noise (see Figure 1b). While our estimator (4) is still able to handle this scenario by assigning small values to some of the block means, the estimates may suffer from high variance. It is thus beneficial to introduce regularized estimation for a better bias-variance tradeoff and improved interpretability.
Here we illustrate a sparse version of the TBM by imposing regularization on the block means for localizing important blocks in the data tensor. This problem can be formulated as variable selection on the block parameters. We propose the following regularized least-squares estimation:
$$\min_{\mathcal{C},\, M_1,\ldots,M_K}\ \|\mathcal{Y} - \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K\|_F^2 + \lambda\, \rho(\mathcal{C}),$$
where $\mathcal{C}$ is the block-mean tensor, $\rho(\cdot)$ is the penalty function, and $\lambda \ge 0$ is the penalty tuning parameter. Some widely used penalties include the Lasso penalty $\rho(\mathcal{C}) = \|\mathcal{C}\|_1$, the sparse subset penalty $\rho(\mathcal{C}) = \|\mathcal{C}\|_0$, the ridge penalty $\rho(\mathcal{C}) = \|\mathcal{C}\|_F^2$, and the elastic net (a linear combination of the $\ell_1$ and $\ell_2$ penalties), among many others.
For brevity, we discuss only the Lasso and sparse subset penalties; other penalizations can be derived similarly. Sparse estimation incurs only slight changes to Algorithm 1. When updating the core tensor in (10), we fit a penalized least-squares problem with respect to $\mathcal{C}$. For the Lasso penalty, the closed form for the entrywise sparse estimate is a soft-thresholding of the ordinary least-squares block mean (see Lemma 3 in the Supplements):
$$\hat c_{a_1,\ldots,a_K} = \operatorname{sign}\!\left(\hat c^{\,\mathrm{ols}}_{a_1,\ldots,a_K}\right) \left( \left|\hat c^{\,\mathrm{ols}}_{a_1,\ldots,a_K}\right| - \frac{\lambda}{2\,|\Omega_{a_1,\ldots,a_K}|} \right)_+, \qquad (12)$$
where $(x)_+ = \max(x, 0)$, $|\Omega_{a_1,\ldots,a_K}|$ denotes the size of the block, and $\hat c^{\,\mathrm{ols}}_{a_1,\ldots,a_K}$ denotes the ordinary least-squares estimate in (10); the sparse subset penalty leads instead to a hard-thresholding rule. The choice of penalty often depends on the study goals and interpretations in specific applications. Given a penalty function, we select the tuning parameter $\lambda$ via the BIC (11), where we modify $p_e$ into $p_e = \|\hat{\mathcal{C}}\|_0 + \sum_k d_k \log R_k$. Here $\|\hat{\mathcal{C}}\|_0$ denotes the number of nonzero entries in the core tensor. The empirical performance of this proposal will be evaluated in Section 7.
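The two thresholding updates can be sketched as follows; here `lam` denotes the effective per-block threshold (i.e., already rescaled by the block size), which we leave to the caller.

```python
import numpy as np

def soft_threshold(c_ols, lam):
    """Lasso (L1) update: shrink the least-squares block mean toward zero."""
    return np.sign(c_ols) * np.maximum(np.abs(c_ols) - lam, 0.0)

def hard_threshold(c_ols, lam):
    """Sparse subset (L0) update: keep the block mean only if it exceeds the threshold."""
    return np.where(np.abs(c_ols) > lam, c_ols, 0.0)
```

Soft-thresholding both selects and shrinks, whereas hard-thresholding only selects, which is why the two penalties trade bias and sparsity differently.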
7 Experiments
In this section, we evaluate the empirical performance of our TBM method. We consider both non-sparse and sparse tensors, and compare the recovery accuracy with other tensor-based methods. Unless otherwise stated, we generate Gaussian tensors under the block model (1). The block means are generated i.i.d. from Uniform[-3, 3]. The entries of the noise tensor are i.i.d. $N(0, \sigma^2)$. In each simulation study, we report summary statistics across replications.
7.1 Finitesample performance
In the first experiment, we assess the empirical relationship between the root mean squared error (RMSE) and the dimension. We consider tensors of order 3 and order 4 (see Figure 2). In the case of order-3 tensors, we increase $d_1$ from 20 to 70, and for each choice of $d_1$, we set the other two dimensions accordingly. Recall that our theoretical analysis suggests a convergence rate for our estimator in terms of the rescaled sample size. Figure 2a plots the recovery error versus the rescaled sample size. We find that the RMSE decreases roughly at the predicted rate, consistent with our theoretical result. We also observe that tensors with a larger number of blocks tend to yield higher recovery errors, as reflected by the upward shift of the curves as the clustering size increases. Indeed, a larger clustering size means a higher intrinsic dimension of the problem, thus increasing the difficulty of the estimation. Similar behavior is observed in the order-4 case in Figure 2b.
In the second experiment, we evaluate the selection performance of our BIC criterion (11). Supplementary Table S1 reports the selected numbers of clusters under various combinations of dimension, clustering size, and noise level. We find that the BIC selection is accurate in the low-to-moderate noise settings. In the high-noise settings, the selected number of clusters is slightly smaller than the true number, but the accuracy improves when either the dimension increases or the clustering size decreases. Within a tensor, the selection appears easier for shorter modes with smaller numbers of clusters. This phenomenon is to be expected, since shorter modes have more effective samples for clustering.
7.2 Comparison with alternative methods
Next, we compare our TBM method with two popular low-rank tensor estimation methods: (i) CP decomposition and (ii) Tucker decomposition. Following the literature [13, 5, 9], we perform the clustering by applying $k$-means to the resulting factors along each of the modes. We refer to these techniques as CP+$k$-means and Tucker+$k$-means.
We generate noisy block tensors with five clusters in each of the modes, and then assess both the estimation and the clustering performance of each method. Note that the TBM takes a single shot to perform estimation and clustering simultaneously, whereas the CP- and Tucker-based methods separate these two tasks into two steps. We use the RMSE to assess the estimation accuracy and the clustering error rate (CER) to measure the clustering accuracy. The CER is calculated using the disagreements (i.e., one minus the Rand index) between the true and estimated block partitions in the three-way tensor. For a fair comparison, we provide all methods with the true number of clusters.
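The CER can be computed directly by pair counting; a minimal sketch of this (an $O(n^2)$ implementation, with the label lists giving each item's block index) is:

```python
from itertools import combinations

def cer(labels_a, labels_b):
    """Clustering error rate: one minus the Rand index, i.e. the fraction of
    item pairs on which the two partitions disagree (grouped together vs apart)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    disagree = sum((labels_a[i] == labels_a[j]) != (labels_b[i] == labels_b[j])
                   for i, j in pairs)
    return disagree / len(pairs)
```

Identical partitions give CER 0, and the measure is invariant to relabeling the clusters.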
Figure 3a shows that TBM achieves the lowest estimation error among the three methods. The gain in accuracy is more pronounced as the noise grows. Neither CP nor Tucker recovers the signal tensor, although Tucker appears to yield a modest clustering performance (Figure 3b). One possible explanation is that the Tucker model imposes orthogonality on the factors, which makes the subsequent $k$-means clustering easier than for the CP factors. Figures 3b and 3c show that the clustering error increases with noise but decreases with dimension. This agrees with our expectation, as in tensor data analysis a larger dimension implies a larger sample size.
Sparse case. We then evaluate the performance when the signal tensor is sparse. The simulated model is the same as before, except that we generate block means from a mixture of a point mass at zero and Uniform[-3, 3], with probabilities $s$ (the sparsity rate) and $1 - s$, respectively. We generate noisy tensors with varying levels of sparsity and noise. We apply the penalized TBM and primarily focus on the selection accuracy. The performance is quantified via the sparsity error rate, which is the proportion of entries that were incorrectly set to zero or incorrectly set to nonzero. We also report the proportion of true zeros that were correctly identified (correct zeros).
Table 2 reports the BIC-selected results averaged across 50 simulations. We see a substantial benefit from penalization. The proposed method is able to guide the algorithm to correctly identify zeros, while maintaining good accuracy in identifying nonzeros. The resulting sparsity level is close to the ground truth. Supplementary Figure S1 shows the estimation error and the sparsity error in a further setting. Again, the sparse TBM outperforms the other methods.
7.3 Real data analysis
Lastly, we apply our method on two real datasets. We briefly summarize the main findings here; the detailed information can be found in the Supplements.
The first dataset is a real-valued tensor, consisting of approximately 1 million expression values from 13 brain tissues, 193 individuals, and 362 genes [4]. We subtracted the overall mean expression from the data, and applied the penalized TBM to identify important blocks in the resulting tensor. The top blocks exhibit a clear tissue-by-gene specificity (Supplementary Table S2). In particular, the top overexpressed block is driven by the tissues {Substantia nigra, Spinal cord} and the genes {GFAP, MBP}, suggesting their elevated expression across individuals. In fact, GFAP encodes filament proteins for mature astrocytes and MBP encodes the myelin sheath for oligodendrocytes, both of which play important roles in the central nervous system [27]. Our method also identifies blocks with strongly negative means (i.e., underexpressed blocks). The top underexpressed block is driven by the tissues {Cerebellum, Cerebellar Hemisphere} and the genes {CDH9, GPR6, RXFP1, CRH, DLX5/6, NKX2-1, SLC17A8}. The gene DLX6 encodes proteins involved in forebrain development [27], whereas the cerebellar tissues are located in the hindbrain. The opposing spatial functions are consistent with the observed underexpression pattern.
The second dataset we consider is the Nations data [2]. This is a binary tensor consisting of 56 political relationships among 14 countries between 1950 and 1965. We note that 78.9% of the entries are zero. Again, we applied the penalized TBM to identify important blocks in the data. We found that the 14 countries are naturally partitioned into 5 clusters: two representing neutral countries, {Brazil, Egypt, India, Israel, Netherlands} and {Burma, Indonesia, Jordan}, one eastern bloc, {China, Cuba, Poland, USSR}, and two western blocs, {USA} and {UK}. The relation types are partitioned into 7 clusters, among which the exports-related activities {reltreaties, book translations, relbooktranslations, exports3, relexports} and the NGO-related activities {relintergovorgs, relngo, intergovorgs3, ngoorgs3} are two major clusters that involve connections between the neutral and western blocs. Other top blocks are described in the Supplement.
We compared the goodness of fit of various clustering methods on the Brain expression and Nations datasets. Because the code for the CoCo method [13] is not yet available, we excluded it from our numerical comparison (see Section 4 for a theoretical comparison with CoCo). Table 3 summarizes the proportion of variance explained by each clustering method:
Our method (TBM) achieves the highest proportion of variance explained, suggesting that the entries within the same cluster are close (i.e., a good clustering). As expected, the sparse TBM yields a slightly lower proportion, because it has a lower model complexity at the cost of a small bias. Remarkably, the sparse TBM still achieves a higher goodness of fit than the other methods. The improved interpretability with little loss of accuracy makes the sparse TBM appealing in applications.
8 Conclusion
We have developed a statistical framework for studying the tensor block model. Under the assumption that tensor entries are distributed with a block-specific mean, our estimator achieves a convergence rate that is faster than previously possible. Our TBM method applies to a broad range of data distributions and can handle both sparse and dense data tensors. We demonstrate the benefit of sparse regularization for the power of detection. In specific applications, prior knowledge may suggest other regularizations for the parameters. For example, in multilayer network analysis, it may sometimes be reasonable to impose symmetry on the parameters along certain modes. In other applications, non-negativity of the parameter values may be enforced. We leave these directions for future study.
Acknowledgements
This research was supported by NSF grant DMS1915978 and the University of WisconsinMadison, Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.
Appendix A Proofs
a.1 Stochastic tensor block model
The following property shows that the centered Bernoulli distribution belongs to the subgaussian family with a subgaussianity parameter equal to $1/2$.
Property 1.
Suppose $X \sim \mathrm{Bernoulli}(p)$ and let $\varepsilon = X - p$. Then $\mathbb{E}\, e^{\lambda\varepsilon} \le e^{\lambda^2/8}$ for all $\lambda\in\mathbb{R}$.
Proof.
For all $\lambda\in\mathbb{R}$, the random variable $\varepsilon$ is bounded within an interval of length 1, so Hoeffding's lemma gives
$$\mathbb{E}\, e^{\lambda\varepsilon} \le e^{\lambda^2/8}.$$
Therefore $\varepsilon$ is subgaussian with parameter $\sigma = 1/2$. ∎
a.2 Proof of Proposition 1
Proof.
Let $\mathbb{P}_\Theta$ denote the (either Gaussian or Bernoulli) tensor block model, where $\Theta$ parameterizes the mean tensor. Since the mapping $\Theta \mapsto \mathbb{P}_\Theta$ is one-to-one, $\Theta$ is identifiable. Now suppose that $\Theta$ can be decomposed in two ways. Based on Assumption 1, we have
$$\Theta = \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K = \mathcal{C}'\times_1 M_1'\times_2\cdots\times_K M_K', \qquad (13)$$
where $\mathcal{C}, \mathcal{C}'$ are two irreducible cores, and $M_k, M_k'$ are membership matrices for all $k\in[K]$. We will prove by contradiction that $M_k$ and $M_k'$ induce the same partition of $[d_k]$, for all $k\in[K]$.
Suppose the above claim does not hold. Then there exists a mode $k$ such that $M_k$ and $M_k'$ induce two different partitions of $[d_k]$. Without loss of generality, we assume $k = 1$. The definition of a partition implies that there exists a pair of indices $i \ne i'$ in $[d_1]$ such that $i$ and $i'$ belong to the same cluster based on $M_1'$, but to different clusters based on $M_1$. Let $a \ne b$ respectively denote the clusters that $i$ and $i'$ belong to, based on $M_1$. Writing $a_k(\cdot)$ and $a_k'(\cdot)$ for the cluster assignments under $M_k$ and $M_k'$, the left-hand side of (13) implies
$$\Theta_{i,\, i_2,\ldots,i_K} = c_{a,\, a_2(i_2),\ldots,a_K(i_K)} \quad \text{and} \quad \Theta_{i',\, i_2,\ldots,i_K} = c_{b,\, a_2(i_2),\ldots,a_K(i_K)} \quad \text{for all } (i_2,\ldots,i_K). \qquad (14)$$
On the other hand, the right-hand side of (13) implies
$$\Theta_{i,\, i_2,\ldots,i_K} = c'_{a_1'(i),\, a_2'(i_2),\ldots,a_K'(i_K)}, \qquad (15)$$
and
$$\Theta_{i',\, i_2,\ldots,i_K} = c'_{a_1'(i'),\, a_2'(i_2),\ldots,a_K'(i_K)} = c'_{a_1'(i),\, a_2'(i_2),\ldots,a_K'(i_K)}, \qquad (16)$$
since $a_1'(i) = a_1'(i')$. Combining (14), (15), and (16), we have
$$c_{a,\, a_2,\ldots,a_K} = c_{b,\, a_2,\ldots,a_K} \quad \text{for all } (a_2,\ldots,a_K). \qquad (17)$$
Equation (17) implies that the mode-1 clusters $a$ and $b$ can be merged into one cluster. This contradicts the irreducibility assumption on the core tensor $\mathcal{C}$. Therefore, $M_k$ and $M_k'$ induce the same partition of $[d_k]$ for all $k$, and thus they are equal up to permutations of cluster labels. The proof is now complete. ∎
a.3 Proof of Theorem 1
The following lemma is useful for the proof of Theorem 1.
Lemma 1.
Suppose $\mathcal{Y} = \Theta + \mathcal{E}$ with $\Theta\in\mathcal{P}$. Let $\hat\Theta$ be the least-squares estimator of $\Theta$. We have
$$\|\hat\Theta - \Theta\|_F \le 2\,\langle \mathcal{E},\, \mathcal{U}\rangle, \quad \text{where } \mathcal{U} = \frac{\hat\Theta - \Theta}{\|\hat\Theta - \Theta\|_F} \text{ and } \|\mathcal{U}\|_F = 1.$$
Proof.
By the definition of the least-squares estimator, we have
$$\|\mathcal{Y} - \hat\Theta\|_F^2 \le \|\mathcal{Y} - \Theta\|_F^2. \qquad (18)$$
Combining (18) with the fact
$$\|\mathcal{Y} - \hat\Theta\|_F^2 = \|\Theta + \mathcal{E} - \hat\Theta\|_F^2 = \|\hat\Theta - \Theta\|_F^2 - 2\langle \mathcal{E},\, \hat\Theta - \Theta\rangle + \|\mathcal{E}\|_F^2, \qquad (19)$$
and noting that $\|\mathcal{Y} - \Theta\|_F^2 = \|\mathcal{E}\|_F^2$, yields
$$\|\hat\Theta - \Theta\|_F^2 \le 2\,\langle \mathcal{E},\, \hat\Theta - \Theta\rangle.$$
Dividing each side by $\|\hat\Theta - \Theta\|_F$, we have
$$\|\hat\Theta - \Theta\|_F \le 2\,\Big\langle \mathcal{E},\, \frac{\hat\Theta - \Theta}{\|\hat\Theta - \Theta\|_F}\Big\rangle.$$
The desired inequality follows. ∎
Proof of Theorem 1.
To study the performance of the least-squares estimator $\hat\Theta$, we need to introduce some additional notation. We view the membership matrix $M_k$ as a surjective function $M_k : [d_k] \to [R_k]$. With a slight abuse of notation, we still use $M_k$ to denote the mapping function and write $M_k(i)$ for the cluster of index $i$ by convention. We use $\mathcal{M} = (M_1,\ldots,M_K)$ to denote the collection of membership matrices. For any set $S$, $|S|$ denotes its cardinality. Note that $|\{\mathcal{M}\}| \le \prod_k R_k^{d_k}$, because each $M_k$ can be identified with a partition of $[d_k]$ into at most $R_k$ disjoint nonempty sets.
For ease of notation, we define $d_{\mathrm{total}} = \prod_k d_k$ and $R_{\mathrm{total}} = \prod_k R_k$. We sometimes identify a tensor in $\mathbb{R}^{d_1\times\cdots\times d_K}$ with a vector in $\mathbb{R}^{d_{\mathrm{total}}}$. By the definition of the parameter space $\mathcal{P}$, an element $\Theta\in\mathcal{P}$ can be equivalently identified by the pair $(\mathcal{M}, \mathcal{C})$, where $\mathcal{M}$ is the collection of membership matrices and $\mathcal{C}$ is the core tensor. Note that, for a fixed clustering structure $\mathcal{M}$, the space of tensors $\{\mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K\}$ is a linear space of dimension at most $R_{\mathrm{total}}$.
By a union bound over the clustering structures $\mathcal{M}$, we have, for any $t > 0$,
$$\mathbb{P}\left( \sup_{\mathcal{U}}\, \langle \mathcal{E},\, \mathcal{U}\rangle \ge t \right) \le \left(\prod_k R_k^{d_k}\right) c_1 \exp\!\left( c_2 R_{\mathrm{total}} - \frac{t^2}{c_2\sigma^2} \right), \qquad (21)$$
for two universal constants $c_1, c_2 > 0$, where the supremum is over unit-norm tensors $\mathcal{U}$ of the form in Lemma 1. Here the tail bound follows from [28] (Theorem 1.19) and the fact that, for a fixed $\mathcal{M}$, the tensor $\mathcal{U}$ lies in a linear space of dimension at most $R_{\mathrm{total}}$. The count of clustering structures uses $|\{\mathcal{M}\}| \le \prod_k R_k^{d_k}$ and $\log \prod_k R_k^{d_k} = \sum_k d_k \log R_k$. Choosing $t$ of order $\sigma\sqrt{R_{\mathrm{total}} + \sum_k d_k \log R_k}$ yields the desired bound. ∎
a.4 Proof of Theorem 2
First we give a list of notation used in the proof. For ease of notation, we allow the basic arithmetic operators ($+$, $-$, etc.) to be applied to pairs of vectors in an elementwise manner.
a.4.1 Notations
$M_k$: the mode-$k$ membership matrix. The element $m^{(k)}_{i,a} = 1$ if and only if the $i$th slice in mode $k$ belongs to the $a$th cluster.
$M_k$, $\hat M_k$: the true and estimated mode-$k$ cluster membership matrices, respectively.
$p_k$: the marginal cluster proportion vector listing the relative cluster sizes along mode $k$. The element $p_k(a)$ denotes the proportion of the $a$th cluster. The cluster proportion vector can be viewed as a function of $M_k$.
$p_k$, $\hat p_k$: the true and estimated mode-$k$ cluster proportion vectors, respectively.
$D_k$: the mode-$k$ confusion matrix between the clusterings $M_k$ and $\hat M_k$. The entries of the confusion matrix are $D_k(a,b) = \frac{1}{d_k}\,|\{i\in[d_k] \colon m^{(k)}_{i,a} = \hat m^{(k)}_{i,b} = 1\}|$. The confusion matrix is a function of $M_k$ and $\hat M_k$.
: the set of all possible partitions that satisfy the marginal nondegenerating assumption.
: the set of blocks that satisfy the marginal nondegenerating assumption for all ;
: the minimum block size in .
for any tensor .
: the quadratic objective function.
Remark 1.
By definition, the confusion matrix satisfies the following two properties:
$\sum_{b} D_k(a, b) = p_k(a)$ for all $a\in[R_k]$;
$\sum_{a} D_k(a, b) = \hat p_k(b)$ for all $b\in[R_k]$.