Tensor Q-Rank: A New Data Dependent Tensor Rank


Hao Kong, Zhouchen Lin,  H. Kong and Z. Lin are with the Key Lab. of Machine Perception (MoE), School of EECS, Peking University, Beijing 100871, P. R. China. Z. Lin is the corresponding author. (e-mails: konghao@pku.edu.cn and zlin@pku.edu.cn).

Recently, the Tensor Nuclear Norm (TNN) regularization based on t-SVD has been widely used in various low-tubal-rank tensor recovery tasks. However, these models usually require the data to change smoothly along the third dimension to ensure a low-rank structure. In this paper, we propose a new definition of tensor rank, named tensor Q-rank, based on a column orthonormal matrix Q, and we further make Q data-dependent. We introduce an explainable selection method for Q, under which the data tensor may have a more significant low tensor Q-rank structure than low tubal-rank structure. We also provide a corresponding envelope of our rank function and apply it to the low-rank tensor completion problem. We then give an effective algorithm and briefly analyze why our method works better than TNN-based methods on complex data with low sampling rates. Finally, experimental results on real-world datasets demonstrate the superiority of our proposed model on the tensor completion problem.

tensor rank, tensor low rank recovery, tensor completion, convex optimization.

I Introduction

With the development of data science, multi-dimensional data structures are becoming more and more complex. The low-rank tensor recovery problem, which aims to recover a low-rank tensor from an observed tensor, has been extensively studied and applied. The problem can be formulated as the following model:


where the observation is obtained from the clean data tensor through a linear operator.

Generally, it is difficult to solve Eq. (1) directly, and different rank definitions correspond to different models. The commonly used definitions of tensor rank are all related to particular tensor decompositions [excel_9]. For example, CP-rank [excel_28] is based on the CANDECOMP/PARAFAC decomposition [excel_29]; Tucker-n-rank [excel_31] is based on the Tucker decomposition [excel_30]; and tensor multi-rank and tubal-rank [excel_18] are based on t-SVD [excel_21]. Minimizing the rank function in Eq. (1) directly is usually NP-hard, hence the rank function is often replaced by a convex or non-convex surrogate. Similar to the matrix case [excel_7_1, excel_7_2], based on different definitions of tensor singular values, various tensor nuclear norms have been proposed as rank surrogates [excel_1_1, excel_12, excel_21].

I-A Existing Methods and Their Limitations

Friedland et al. [excel_12] introduce cTNN (Tensor Nuclear Norm based on CP) as the convex relaxation of the tensor CP-rank:


where the decomposition is over rank-one terms given by vector outer products. However, for a given tensor, minimizing the cTNN objective directly is difficult because computing the CP-rank is usually NP-complete [excel_11, excel_42], which also means we cannot verify whether cTNN's implicit decomposition is consistent with the ground-truth CP decomposition. Meanwhile, it is hard to measure cTNN's tightness relative to the CP-rank, since whether cTNN satisfies the continuous analogue of Comon's conjecture [excel_12] remains unknown. Although Yuan et al. [excel_13] give the sub-gradient of cTNN by leveraging its dual property, the high computational cost makes it difficult to use in practice.

To reduce the computational cost, Liu et al. [excel_1_1] define a tensor nuclear norm named SNN (Sum of Nuclear Norms) based on the Tucker decomposition [excel_30]:


where the tensor is unfolded along each dimension and the matrix nuclear norm, i.e., the sum of singular values, is applied to each unfolding matrix. The convenient calculation makes SNN widely used [TNNLS_fu2016tensor, TNNLS_LiuGeneralized, excel_1_1, excel_15, excel_17_1]. It is worth mentioning that, although SNN has a representation similar to the matrix case, Paredes et al. [excel_16] point out that SNN is not the tightest convex relaxation of the Tucker-n-rank [excel_31], but is actually an overlap regularization of it. [Latent_2010, Latent_2013, Latent_2014] also propose a new regularizer named the Latent Trace Norm to better approximate the tensor rank function. In addition, because SNN unfolds the tensor directly along each dimension, SNN-based models make insufficient use of the tensor's structural information.
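As a concrete sketch of the SNN regularizer (our illustration, not the authors' code; the helper names `unfold` and `snn` and the unit weights are our own assumptions):

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding: move dimension `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def snn(X, weights=(1.0, 1.0, 1.0)):
    """Sum of Nuclear Norms: weighted sum of the nuclear norms of the
    three mode-k unfolding matrices of a 3-way tensor."""
    return sum(w * np.linalg.norm(unfold(X, k), 'nuc')
               for k, w in enumerate(weights))
```

For a rank-one tensor built from unit vectors, each unfolding has nuclear norm 1, so `snn` returns the number of modes.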

To avoid the information loss in SNN, Kilmer et al. [excel_21] propose a tensor decomposition named t-SVD based on a Fourier transform matrix F, and Zhang et al. [excel_10] give the corresponding definition of the tensor nuclear norm, i.e., the Tensor Nuclear Norm (TNN):


where the nuclear norms are taken over the frontal slices of the tensor transformed by the mode-3 Tucker product [excel_30] with the Fourier matrix F. Benefiting from its unique computation and its efficient use of temporal features, TNN has attracted extensive attention in recent years [excel_1_2, LuIJCAI2018, TNNLS_yin2018multiview, TNNLS_hu2016twist]. The Fourier transform along the third dimension gives TNN-based models a natural computational advantage for video and other data with strong temporal continuity along a certain dimension.
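A minimal NumPy sketch of this computation (our illustration; the 1/n3 normalization follows the convention of Lu et al., and the function name `tnn` is ours):

```python
import numpy as np

def tnn(X):
    """Tensor Nuclear Norm via t-SVD: apply the DFT along the third
    dimension, then average the nuclear norms of the frontal slices."""
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)  # transform-domain tensor (complex)
    return sum(np.linalg.norm(Xf[:, :, k], 'nuc') for k in range(n3)) / n3
```

For a tensor that is constant along the third dimension, only the zero-frequency slice is non-zero, so `tnn` reduces to the nuclear norm of one frontal slice.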

However, when considering the smoothness of different data, using a fixed Fourier transform matrix may bring some limitations. In this paper, we use "smooth" and "non-smooth" along a certain dimension in the usual intuitive sense. For example, a continuous video is smooth, but if the data tensor is a concatenation of several different scene videos, or a random arrangement of all frames, then the data is non-smooth.

Firstly, TNN needs to compute the Singular Value Decomposition (SVD) in the complex field, which is slightly slower than in the real field. Besides, the experiments in related papers [excel_10, LuIJCAI2018, excel_22, Kong2018] are usually based on special datasets that change smoothly along the third dimension, such as RGB images and short videos. Non-smooth data may increase the number of non-zero tensor singular values [excel_21, excel_10], weakening the significance of the low-rank structure. Since the tensor multi-rank [excel_10] is actually the rank of each projection matrix on a different Fourier basis, non-smooth change along the third dimension may cause large singular values to appear on high-frequency projection slices.

Meanwhile, we also find that SNN-based methods usually perform better than TNN when dealing with non-smooth data. On the one hand, when handling data with strong continuity along a certain dimension, such as video, SNN-based methods may destroy the continuity between frames and thus perform worse than TNN. On the other hand, when the principal components along a certain dimension need to be extracted, such as for CIFAR-10 with a disordered arrangement, SNN-based methods can usually achieve better recovery results than TNN. To overcome the respective limitations of these two methods, we believe their advantages can be unified into a new method.

I-B Unified Analysis of SNN and TNN

To study the similarities and differences between SNN and TNN, we need to bridge the gap between their different calculations. Kernfeld et al. [kernfeld2015tensor] generalize the t-product by introducing a new operator named the cosine transform product, defined with an arbitrary invertible linear transform (or arbitrary invertible matrix). The Discrete Fourier Transform along the third dimension in the t-product [excel_21] can then be regarded as a special case with the Fourier transform matrix F. Meanwhile, for the unfolding matrix in definition (3), the mode-3 term can be rewritten through the right singular matrix V of the mode-3 unfolding.

From the above two aspects, we point out that: (1) each singular value in TNN is actually a singular value of a frontal slice of the tensor obtained by the mode-3 Tucker product [excel_30] of the original tensor with the Fourier transform matrix F along the third dimension; (2) each singular value in SNN is the Frobenius norm of a frontal slice of the tensor obtained by the Tucker product [excel_30] of the original tensor with the singular matrices along each dimension.

It can be seen that different multipliers (e.g., the fixed Fourier matrix F and the data-dependent singular matrix V) lead to different definitions of rank, which may lead to different experimental results. On the one hand, each summed value in SNN is the Frobenius norm of a projected frontal slice, while each summed value in TNN is a singular value of a projected frontal slice. That is to say, TNN considers the rank of the projected slices and has a more detailed definition of low-rank structure than SNN, which also improves the performance guarantees of TNN-based methods [LuIJCAI2018]. On the other hand, the multiplier V in SNN is data-dependent, while F in TNN is a fixed Discrete Fourier transform matrix. Our later experiments show that, when handling non-smooth data such as disordered images of the same object, F may extract too much component information at different frequencies, whereas V can be regarded as a principal component extraction matrix and thus extracts features better.

Naturally, we consider that combining a more detailed definition of low-rank structure (similar to TNN) with a better, data-dependent set of projection bases (similar to SNN) can effectively overcome the limitations of existing TNN- and SNN-based methods.

I-C Motivation

In the tensor completion task, we find that when dealing with non-smooth data, Tensor Nuclear Norm (TNN) based methods usually perform worse than they do on smooth data. At the same time, Sum of Nuclear Norms (SNN) based methods are almost unaffected by the smoothness of the data. However, on most datasets, TNN-based methods are much better than SNN-based methods.

Our motivation is to combine the advantages of these two norms, making the new method more robust to data smoothness while maintaining performance similar to TNN. We consider that: (1) the robustness of SNN-based methods along the third dimension comes from the mode-3 regularizer, which assumes that the dimension of the column subspace of the mode-3 unfolding (projected by its right singular matrix V) is quite small, whereas in TNN the right singular matrix is replaced by a fixed Fourier matrix; (2) the excellent performance of TNN-based methods comes from its definition of tensor singular values, while in the mode-3 regularizer of SNN, the singular values of the mode-3 unfolding matrix are too simple to approximate the true subspace.

In summary, we combine the data-dependent orthogonal matrix of SNN with the superior tensor singular value definition of TNN, and then propose our tensor Q-nuclear norm based method. Moreover, we give a reasonable explanation of the proposed model and of the selection method for Q. It should be pointed out that although our proposed method may seem similar to SNN, our definition of singular values is quite different, which lets our method better preserve the internal structure of the tensor.

I-D Contributions

In summary, our main contributions include:

  • We generalize the Fourier transform matrix to a real orthonormal matrix Q, and then propose a new definition of tensor rank, named the tensor Q-rank. The low-rank tensor recovery problem can be rewritten as:


    We further provide an envelope of the tensor Q-rank within an appropriate region, named the tensor Q-nuclear norm, as a regularizer.

  • To obtain a more significant low-rank structure, we further introduce an explainable selection method for Q and make Q a learnable variable w.r.t. the data. Figure 1 shows, on a video with background changes, that under our proposed selection of Q the low-rank structure is more significant than that of TNN.

  • Finally, we apply the proposed regularizer with adaptive Q to the tensor completion problem. For the special case where Q is fixed, we give a complete convergence proof for the corresponding completion algorithm and a performance guarantee for exact completion.

Fig. 1: Comparison of the two low-rank structures given by our proposed regularization and by TNN regularization on non-smooth video data. Left: the first sorted singular values under TNN regularization (suitably normalized) and under ours. Right: the short video with background changes.

II Notations and Preliminaries

We introduce some notations and necessary definitions which will be used later. Tensors are represented by calligraphic uppercase letters. Matrices are represented by boldface uppercase letters. Vectors are represented by boldface lowercase letters. Scalars are represented by lowercase letters. Given a 3-order tensor, we use superscripts to denote its frontal slices and subscripts to denote its entries. The mode-i unfolding of a tensor [excel_9], the i-th singular value of a matrix, and the pseudo-inverse of a matrix are defined as usual, and the matrix nuclear norm and other matrix norms are denoted in the standard way. We follow the notations of [kernfeld2015tensor] for transform-domain tensors and products. Due to limited space, for the definitions of the tensor spectral norm [excel_1_2], the Tucker product [excel_30], the t-product [excel_21], and so on, please refer to our Supplementary Materials.

III Main Result

III-A Tensor Q-rank

For a given tensor and the Fourier transform matrix F, if we use superscripts to represent the frontal slices of the transformed tensor, then the tensor multi-rank and the Tensor Nuclear Norm (TNN) can be formulated via the mode-3 Tucker product as follows:


Kernfeld et al. [kernfeld2015tensor] generalize the t-product by introducing a new operator named the cosine transform product, defined with an arbitrary invertible linear transform (or arbitrary invertible matrix); the transform-domain tensor and its inverse are defined accordingly.

Here, we further take the invertible multiplier to be a general real orthogonal matrix. It is worth mentioning that an orthogonal matrix has two good properties: it is invertible, and it keeps the Frobenius norm invariant. We then propose a new definition of tensor rank, named the tensor Q-rank.

Definition 1.

(Tensor Q-rank) Given a tensor and a fixed real orthogonal matrix Q, the tensor Q-rank of the tensor is defined as follows:


where the sum runs over the ranks of the frontal slices of the tensor obtained by the mode-3 product of the original tensor with Q.

Generally, in low-rank recovery models it is quite difficult to minimize the rank function directly. Therefore, auxiliary definitions of tensor singular values and tensor norms are needed to relax the rank function.

III-B Definition of Tensor Singular Value and Tensor Norm

Considering the superior performance of TNN in many existing tasks, we adopt a singular value definition similar to TNN's. Given a tensor and a fixed orthogonal matrix Q, the Q-singular values of the tensor are defined as the matrix singular values of the frontal slices of the tensor transformed by the mode-3 product with Q. When the orthogonal matrix Q is fixed, the corresponding tensor spectral norm and tensor nuclear norm can also be given.

Definition 2.

(Tensor Q-spectral norm) Given a tensor and a fixed real orthogonal matrix Q, the tensor Q-spectral norm of the tensor is defined as follows:

Definition 3.

(Tensor Q-nuclear norm) Given a tensor and a fixed real orthogonal matrix Q, the tensor Q-nuclear norm of the tensor is defined as follows:


Moreover, with a fixed Q, the convexity, duality, and envelope properties are all preserved.
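Definitions 1-3 can be sketched numerically as follows (our illustration; the function names and the einsum convention for the mode-3 product are our own assumptions):

```python
import numpy as np

def mode3(X, Q):
    """Mode-3 Tucker product: mix the frontal slices of X with the rows of Q."""
    return np.einsum('ijk,lk->ijl', X, Q)

def tensor_q_rank(X, Q, tol=1e-8):
    """Tensor Q-rank: sum of the matrix ranks of the frontal slices of X x_3 Q."""
    Xh = mode3(X, Q)
    return sum(np.linalg.matrix_rank(Xh[:, :, k], tol=tol)
               for k in range(Xh.shape[2]))

def tensor_q_nuclear(X, Q):
    """Tensor Q-nuclear norm: sum of the nuclear norms of the same slices."""
    Xh = mode3(X, Q)
    return sum(np.linalg.norm(Xh[:, :, k], 'nuc')
               for k in range(Xh.shape[2]))
```

With Q equal to the identity, both quantities reduce to slice-wise matrix rank and nuclear norm.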

Property 1.

(Convexity) Tensor Q-nuclear norm and Tensor Q-spectral norm are both convex.

Property 2.

(Duality) Tensor Q-nuclear norm is the dual norm of Tensor Q-spectral norm, and vice versa.

Property 3.

(Convex Envelope) Tensor Q-nuclear norm is the tightest convex envelope of the Tensor Q-rank within the unit ball of the Tensor Q-spectral norm.

These three properties are very important in low-rank recovery theory. Property 3 implies that we can use the tensor Q-nuclear norm as a rank surrogate. That is to say, when the orthogonal matrix Q is given, we can replace the low tensor Q-rank model (5) with model (11) to recover the original tensor:


III-C An Explainable Selection Method of Q

In practical problems, the selection of Q often has a tremendous impact on the performance of model (11). If Q is the identity matrix, the model is equivalent to solving each frontal slice separately by low-rank matrix methods [excel_7_1]. If Q is a Fourier transform matrix, it is equivalent to the TNN-based methods [excel_10, excel_1_2]. Following the analysis in the previous section, for given data, a Q that makes the tensor Q-rank lower usually makes the recovery problem (11) easier (we discuss this conclusion in Theorem 4).

Considering the data dependence of V in SNN, we hope to find a data-dependent Q, instead of the fixed F in TNN, that reduces the number of non-zero singular values of each projected slice. In Eq. (11), let the projected tensor be the mode-3 product of the data with Q; our analyses are as follows. (1) Considering the singular value definition of TNN, the inequality between a matrix's spectral norm and its Frobenius norm implies that the closer a projected slice's Frobenius norm is to zero, the more of its singular values are close to zero, which leads to a more significant low-rank structure. (2) If Q is an orthogonal matrix, then it is also invertible, and hence the sum of the squares of the projected slices' Frobenius norms is a constant, namely the squared Frobenius norm of the original tensor.

Combining the above two points, we need to make more of the projected slices' Frobenius norms close to zero while their sum of squares stays constant. From the perspective of variable distributions, we need to choose a data-dependent Q that maximizes the variance of the projected slices' Frobenius norms. For better explanation, we give the following two lemmas; the optimality condition of Lemma 1 supports our hypothesis that more of these norms should be close to zero.

Lemma 1.

Given non-negative variables whose sum of squares is fixed, maximizing their variance is equivalent to minimizing their sum. Moreover, at the optimum there is exactly one non-zero variable.

Lemma 2.

Given a fixed matrix and its skinny Singular Value Decomposition (SVD), the right singular matrix optimizes the following:


In other words, the right singular matrix attains the minimum. For the other case of matrix shape, the right singular matrix is also an optimal solution.

For the proofs of these two lemmas, please refer to the Appendix. Notice that minimizing the sum in Lemma 1 can be seen as a linear optimization problem over the first-orthant portion of a sphere: the intersections of the sphere with the coordinate axes lie on the optimal hyperplane, each corresponding to exactly one non-zero coordinate. This condition is consistent with wanting more variables close to zero.

By Lemma 1, maximizing the variance of the projected slices' Frobenius norms is equivalent to minimizing their sum, which can be written in terms of the mode-3 unfolding matrices [excel_30]. Lemma 2 then shows that, to minimize this quantity w.r.t. Q, we can choose Q as the right singular matrix of the mode-3 unfolding.

Through the previous analysis, we make the selection of Q data-dependent, and then use a bilevel model to compute an adaptive Q. The following definition gives the details.

Definition 4.

(Low Tensor Q-rank model with adaptive Q) By setting the adaptive module for Q as a lower-level sub-problem, the low tensor Q-rank model (5) is transformed into the following:


And the corresponding surrogate model (11) is also replaced by the following:


In Eqs. (13) and (14), the constraint involves the mode-3 unfolding matrix of the tensor. In fact, Lemma 2 implies that we can take Q to be the right singular matrix of the mode-3 unfolding.

Within this framework, the orthogonal matrix Q is tied to the tensor, and the constraint makes the tensor Q-rank as low as possible. As we analyzed in the Introduction, there should then be more "small" frontal slices of the projected tensor, whose Frobenius norms are close to zero.
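The adaptive choice above can be sketched as follows (our illustration; the unfolding/transpose convention is one consistent choice and not necessarily the paper's exact notation):

```python
import numpy as np

def adaptive_q(X):
    """Data-dependent Q following Lemma 2: the right singular matrix of the
    mode-3 unfolding (columns indexed by the third dimension), so that the
    projected slice energies concentrate on the leading slices."""
    M = X.reshape(-1, X.shape[2])        # column k = vectorized frontal slice k
    _, _, Vt = np.linalg.svd(M, full_matrices=True)
    return Vt.T                          # Q: columns are right singular vectors
```

After projection, the Frobenius norms of the mixed slices equal the singular values of the unfolding, so for low-rank unfoldings the trailing slices vanish.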

Proposition 1.

Through the above deduction, if we let the operator return the right singular matrix of the mode-3 unfolding, then models (13) and (14) can be abbreviated as follows:

Remark 1.

In fact, Q can be chosen as any matrix satisfying the optimality condition, as long as its first columns come from the right singular matrix and it is pseudo-invertible so that the reconstruction equality holds.

Remark 2.

In this paper, we only present one explainable selection of Q. As mentioned, different Q may lead to different models. Our later experiments also report the PSNR results of models with a random Q and an Oracle Q.

This selection method for Q guarantees a low tensor Q-rank structure of the data with high probability. In addition to our analysis based on Lemmas 1 and 2, we can also use Random Matrix Theory to explain the rationality of this selection.

Consider the frontal slices of the projected tensor; the Frobenius norm of the k-th projected slice is controlled by the k-th singular value of the mode-3 unfolding. According to the distribution of singular values [RMTSVD, anderson2010introduction], for a general data matrix (or random matrix), the first few singular values are much larger than the others with high probability, and most of the remaining singular values are close to zero. Therefore, this relation guarantees that most tensor singular values are also close to zero with high probability.

The question now is whether the objective in Eq. (16) is still an envelope of the rank function in Eq. (15) within an appropriate region. The following theorem shows that even though the objective is no longer a convex function in the bilevel framework (16), since Q depends on the tensor, we can still use it as a surrogate for a lower bound of the rank in Eq. (15).

Theorem 1.

Given a column orthonormal matrix Q, we use the following abbreviations for the corresponding concepts:


Then, within the given region, the inequality holds. Moreover, for every fixed Q, Property 3 indicates that the tensor Q-nuclear norm is still the tightest convex envelope of the tensor Q-rank in the corresponding space.

Remark 3.

For any column orthonormal matrix , the corresponding conclusion also holds as long as . That is to say, holds within the region .

Theorem 1 shows that although the objective could be non-convex, its value always lies below the rank function. Therefore, model (16) can be regarded as a reasonable low-rank tensor recovery model. Notice that it is actually a bilevel optimization problem.

IV Application to Tensor Completion

IV-A Model

In the 3-order tensor completion task, the index set consists of the observable indices, and the operator in Eqs. (15) and (16) is replaced by the orthogonal projection operator that keeps the observed entries and sets the others to zero. The observed tensor agrees with the ground truth on the observed entries. The tensor completion model based on our assumption is then given by:


where the target tensor is assumed to have a low-rank structure and Q is a column orthonormal matrix. To solve the model by an ADMM-based method [LuADMMPAMI], we introduce an intermediate tensor to separate the two terms, which translates the constraint into an equivalent linear constraint. Then we get the following model:


IV-B Optimization

Since Q depends on the tensor, it is difficult to solve model (21) directly. Here we adopt the idea of alternating minimization, solving for Q and the tensor alternately. We separate the sub-problem of solving Q as a sub-step in each iteration, and then update the other variables with a fixed Q by the ADMM method [LuADMMPAMI, LuIJCAI2018]. The partial augmented Lagrangian function of Eq. (21) is


with a dual variable and a penalty parameter. We can then update each component alternately. Algorithms 1 and 2 give the details of the optimization for Eq. (21). Note that the sub-step of updating the intermediate tensor involves the following proximal operator:


where Q is a given column orthonormal matrix and the tensor Q-nuclear norm is defined in Eq. (10). Algorithm 2 shows the details of evaluating this operator.

In Eq. (24), as the iteration converges, the intermediate tensor matches the target tensor, and the constraints of the original problem (20) hold in the limit. For the case where Q is a fixed orthogonal matrix, the corresponding optimization algorithm is provided in the Supplementary Materials.

Input: Observation samples , , of tensor .
Initialize: . Parameters .
while not converged do

  1. Update   by

  2. Update by

  3. Update by


  4. Update the dual variable by

  5. Update by

  6. Check the convergence condition: , , and .

  7. .

end while
Output: the target tensor.

Algorithm 1 Solving the problem (21) by ADMM.

Input: Tensor , column orthonormal matrix .

  1. .

  2. for :


    , where .

  3. end for

Output: Tensor .

Algorithm 2 Solving the proximal operator in Eq. (23) and (25).
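A compact sketch of this proximal operator (our illustration of Algorithm 2; the function name and the einsum convention are our own assumptions): project with Q along the third mode, soft-threshold the singular values of every frontal slice, then project back.

```python
import numpy as np

def prox_tqn(Y, Q, tau):
    """Proximal operator of tau * (tensor Q-nuclear norm):
    slice-wise singular value thresholding in the Q-transformed domain."""
    n3 = Y.shape[2]
    Yh = np.einsum('ijk,kl->ijl', Y, Q)      # mix frontal slices with columns of Q
    Xh = np.empty_like(Yh)
    for k in range(n3):
        U, s, Vt = np.linalg.svd(Yh[:, :, k], full_matrices=False)
        Xh[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold
    return np.einsum('ijl,kl->ijk', Xh, Q)   # back-projection (Q orthonormal)
```

With Q equal to the identity, this reduces to ordinary slice-wise singular value thresholding.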

IV-C Convergence and Performance Analyses

Convergence: For models (20) and (21), it is hard to analyze the convergence of the corresponding optimization method directly. The constraint on Q is non-linear and the objective function is essentially non-convex, which increases the difficulty of the analysis. However, the conclusions of [LuADMMPAMI, excel_26, excel_46, Lin2011NIPS, OPTALG] guarantee the convergence to some extent. In practical applications, we can fix Q every few iterations and then use Theorem 3 to obtain a conditionally optimal solution under proper parameters.

Theorem 2.

With proper parameters in Algorithm 1, the generated sequence is convergent and its limit satisfies the constraints of problem (21).

Theorem 3.

Given a Q fixed over the iterations, the tensor completion model (21) can be solved effectively by Algorithm 1, with the adaptive-Q sub-step in Eq. (24) replaced by the fixed Q. The rigorous convergence guarantee follows directly from convexity (see Supplementary Materials).

Complexity: One iteration of Algorithm 1 means updating all variables once in order. The size of the tensor data has a great influence on the per-iteration cost: when the third dimension is much larger than the other two, or the spatial dimensions are large enough, our method is more efficient than the TNN-based model in [excel_1_2]. In addition, since our transform is real, our method avoids complex multiplications and complex SVDs, which slightly reduces the computational cost. The running-time experiment in Figure 3 also validates the efficiency of our algorithm in some cases.

Performance: With a fixed , the exact tensor completion guarantee for model (11) is shown in Theorem 4. Our synthetic experiments also verify the conclusion to some extent.

Theorem 4.

Given a fixed orthogonal matrix Q, assume that the tensor has a low tensor Q-rank structure and satisfies the incoherence conditions. If the number of uniformly sampled observations is large enough, then the ground truth is the unique solution to Eq. (11) with high probability, where the rank is replaced by the tensor Q-rank and the corresponding incoherence parameter appears in the bound (see Supplementary Materials).

Through the proof of [LuIJCAI2018], the sampling rate should be proportional to the dimension of the singular space of the ground truth. (The definitions of the projection operators can be found in [excel_1_2, LuIJCAI2018] or in the Supplementary Materials.) Proposition 15 in [LuIJCAI2018] also gives a related necessary condition. These two conditions indicate that once the dimension of this space is large, a larger sampling rate is needed. Figure 3 in [LuIJCAI2018] verifies this deduction experimentally.

In fact, the smoothness of the data along the third dimension has a great influence on the Degrees of Freedom (DoF) of this singular space. Non-smooth change along the third dimension is likely to increase the dimension of the space under the Fourier basis vectors, which makes TNN-based methods ineffective. Our experiments on CIFAR-10 (Table I) confirm this conclusion.

As for model (20) with adaptive Q, extracting principal components along the third dimension makes the dimension of the corresponding singular space under Q as small as possible. In other words, for complex data that is non-smooth along the third dimension, the adaptive Q may reduce the dimension of this space below that under the Fourier basis, leading to a lower required sampling rate. Our experimental results in Figure 2 and Tables I and II also illustrate that our proposed method performs better than the TNN-based method on complex data at lower sampling rates.

V Experiments

In this section, we conduct numerical experiments to evaluate our proposed model (20). Given the observed corrupted tensor and the true tensor, we represent the recovered tensor (the output of the algorithms) accordingly, and use the Peak Signal-to-Noise Ratio (PSNR) to measure the reconstruction error:


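A common PSNR implementation consistent with this usage (our sketch; the peak-value convention here is an assumption, and the paper's exact normalization may differ):

```python
import numpy as np

def psnr(X_rec, X_true, peak=None):
    """PSNR in dB between a recovered and a ground-truth tensor.
    `peak` defaults to the largest magnitude in the ground truth."""
    peak = np.max(np.abs(X_true)) if peak is None else peak
    mse = np.mean((X_rec - X_true) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit images with peak 255 and unit mean squared error, this gives roughly 48 dB.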
V-A Synthetic Experiments

Fig. 2: The numbers plotted in the figure are average PSNRs over 10 random trials. The gray scale reflects the quality of the completion results of four different models (TQN, TNN, LRTC, LRMC). The white area represents the maximum PSNR.

In this part we compare our proposed method (named TQN model) with other mainstream algorithms, including TNN [excel_10, LuIJCAI2018], SiLRTC [excel_1_1], and LRMC [excel_7_1].

We examine the completion task with a varying tensor Q-rank of the ground-truth tensor and a varying sampling rate. Firstly, we generate a random tensor whose entries are independently sampled from a standard distribution, and project it with a random column orthonormal matrix to obtain a true tensor with the desired tensor Q-rank. After that, we create the index set by using a Bernoulli model to randomly sample a subset of entries. For each parameter pair, we run 10 trials with different random seeds and take the average as the final result. The parameters of the TQN model in Algorithm 1 and of the LRTC model are fixed across all trials.
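The synthetic construction can be sketched as follows (our illustration of the setup in this subsection; the sizes, function name, and exact construction are assumptions):

```python
import numpy as np

def synth_low_qrank(n1, n2, n3, r_slice, r3, seed=0):
    """Synthetic tensor with low tensor Q-rank: stack r3 random
    rank-r_slice frontal slices, then mix them along the third
    dimension with a random column orthonormal matrix."""
    rng = np.random.default_rng(seed)
    core = np.stack([rng.standard_normal((n1, r_slice)) @
                     rng.standard_normal((r_slice, n2))
                     for _ in range(r3)], axis=2)        # (n1, n2, r3)
    Q, _ = np.linalg.qr(rng.standard_normal((n3, r3)))   # (n3, r3), orthonormal cols
    return np.einsum('ijr,kr->ijk', core, Q)
```

The mode-3 unfolding of the result has rank at most r3, so the tensor has a significant low-rank structure along the third dimension by construction.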

As shown in the upper-left regions of the TQN panel in Figure 2, Algorithm 1 can effectively solve our proposed model (20). The larger the tensor Q-rank is, the larger the sampling rate needs to be, which is consistent with our performance analysis in Theorem 4.
By comparing the results of the four methods, we find that TNN and LRMC have very poor robustness to data with non-smooth change. The results of TQN, LRTC, and TNN support our assumptions (Motivation), which may imply that TQN combines the advantages of TNN and SNN.

V-B Real-World Datasets

In this part we compare our proposed method with TNN, SiLRTC, LRMC, the Latent Trace Norm [Latent_2013], and the t-Schatten-p norm [Kong2018]. For other improved or matrix-factorization-based algorithms, such as [excel_22, excel_38], our model can also be extended in similar ways. For the sake of fairness, we only compare our method with the basic mainstream frameworks. We validate our algorithm on three datasets: (1) CIFAR-10 (http://www.cs.toronto.edu/~kriz/cifar.html); (2) COIL-20 (http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php); (3) HMDB51 (http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). We use fixed parameter settings in our TQN method and in tSp, and the default settings of Lu et al. (https://github.com/canyilu/LibADMM) for the others.

Up: 3000 images
Sampling Rate               0.1     0.2     0.3     0.4     0.5     0.6
TQN with Random Q (Ours)    10.86   15.47   18.09   20.20   22.30   24.49
TQN with Oracle Q (Ours)    25.39   30.85   39.43   109.52  200     200
TQN with Adaptive Q (Ours)  18.83   21.10   22.89   24.56   26.26   28.07
TNN [LuIJCAI2018]           9.84    12.73   15.68   18.71   21.60   24.26
SiLRTC [excel_1_1]          16.87   20.04   21.99   23.80   25.62   27.57
LRMC [excel_7_1]            11.20   15.81   18.26   20.41   22.51   24.72

Down: 10000 images
Sampling Rate               0.1     0.2     0.3     0.4     0.5     0.6
TQN with Random Q (Ours)    10.84   15.45   18.06   20.19   22.29   24.48
TQN with Oracle Q (Ours)    45.75   200     200     200     200     200
TQN with Adaptive Q (Ours)  19.06   21.43   23.27   24.97   26.65   28.42
TNN [LuIJCAI2018]           8.18    10.10   12.19   14.63   17.59   21.20
SiLRTC [excel_1_1]          14.02   19.65   22.44   24.38   26.21   28.12
LRMC [excel_7_1]            11.15   15.79   18.25   20.40   22.51   24.72
TABLE I: Comparison of PSNR results on CIFAR images with different sampling rates. Up: experiments on the 3000-image case. Down: experiments on the 10000-image case.

V-B1 Influences of Q

Corresponding to Remark 1, we use a Random orthogonal matrix and an Oracle matrix (the right singular matrix of the mode-3 unfolding of the ground truth) to test the influence of Q. The results of the three TQN-based models in Tables I and II show that Q plays an important role in tensor recovery. Compared with the Random case, our Algorithm 1 is effective at finding a better Q. Table I also shows that a proper Q may make it much easier to recover the ground truth: with a sufficient sampling rate, an Oracle Q can lead to an "exact" recovery.

V-B2 CIFAR-10

We consider the worst case for TNN based methods, in which there is almost no smoothness along the third dimension of the data. We randomly select 3000 and 10000 images from one batch of CIFAR-10 [CIFAR] as our two ground-truth tensors, respectively. Then we solve model (21) with our proposed Algorithm 1. The results are shown in Table I.
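The observed tensors at the sampling rates in Table I can be generated along the following lines. This is a minimal NumPy sketch under our own assumptions (the function name and uniform entrywise Bernoulli sampling are not specified in the text):

```python
import numpy as np

def random_observation(x_true, rate, seed=0):
    """Sample entries of x_true uniformly at random at the given
    sampling rate. Returns the observed tensor (zeros at the missing
    entries) and the boolean mask of observed positions."""
    rng = np.random.default_rng(seed)
    mask = rng.random(x_true.shape) < rate   # True with probability `rate`
    return x_true * mask, mask
```

The completion model then only constrains the recovered tensor on the positions where `mask` is True.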

Fig. 3: Running time comparisons of different methods on CIFAR at a fixed sampling rate.

Table I verifies our hypothesis that TNN regularization performs badly on data that change non-smoothly along the third dimension. Our method and SiLRTC are clearly better than the other two methods at low sampling rates. Moreover, comparing the two groups of experiments, TQN and SiLRTC perform better on the larger 10000-image tensor. This may be because increasing the data volume makes the principal components more significant. The first few principal components of both tensors are shown in the Supplementary Materials for further explanation.
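The PSNR values reported in Tables I and II follow the standard definition, 10*log10(peak^2 / MSE); a minimal NumPy sketch is given below (the function name and the peak value of 255 for 8-bit images are our assumptions):

```python
import numpy as np

def psnr(x_hat, x_true, peak=255.0):
    """Peak signal-to-noise ratio over the whole tensor, in dB."""
    mse = np.mean((x_hat - x_true) ** 2)
    if mse == 0:
        return np.inf          # exact recovery; papers often cap this value
    return 10.0 * np.log10(peak ** 2 / mse)
```

This explains the constant entries of 200 in Table I: when the recovery is numerically exact, the reported PSNR is capped rather than infinite.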

The above analyses confirm that our proposed regularization is data-dependent, just like SNN regularization. Moreover, being based on a better definition of tensor singular values, it can exploit the internal structure of the data better than SNN regularization does.

V-B3 Running time on CIFAR

As shown in Figure 3, we test the running times of the different models. The two figures indicate that our TQN model has the highest computational efficiency per iteration. Figure 4 further implies that a suitable choice of r can balance computational efficiency and recovery accuracy.

V-B4 COIL-20 and Short Video from HMDB51

COIL-20 [COIL] contains 1440 images of 20 objects taken from different angles; each image is resized to a common resolution. The upper part of Table II shows the results of the numerical experiments. We also select a background-changing video from HMDB51 [HMDB] for the video inpainting task; Figure 1 shows some frames of this video, and the lower part of Table II shows the results.

We can see that the latent trace norm is much better than TNN on COIL, which validates our assumption that the tensor trace norm is much more robust than TNN when processing non-smooth data.

Overall, both TQN and TNN perform better than the other methods on these two datasets, owing to their higher smoothness along the third dimension. This is mainly because the definitions of tensor singular values in TQN and TNN make better use of the tensor's internal structure, which is also the main difference between them and SNN. Meanwhile, our method is clearly better than the others at all sampling rates, which reflects the superiority of our data-dependent Q. All visual comparisons are provided in the Supplementary Materials.

Up (COIL-20):

Sampling Rate                   | 0.1   | 0.2   | 0.3   | 0.4   | 0.5   | 0.6
TQN with Random Q (Ours)        | 16.05 | 20.07 | 23.02 | 25.57 | 27.95 | 30.34
TQN with Oracle Q (Ours)        | 22.97 | 25.32 | 27.18 | 28.90 | 30.68 | 32.51
TQN with Adaptive Q (Ours)      | 22.79 | 25.34 | 27.29 | 29.08 | 30.86 | 32.74
TNN [LuIJCAI2018]               | 19.20 | 22.08 | 24.45 | 26.61 | 28.72 | 30.91
SiLRTC [excel_1_1]              | 18.87 | 21.80 | 23.89 | 25.67 | 27.37 | 29.14
Latent Trace Norm [Latent_2013] | 19.09 | 22.98 | 25.75 | 28.11 | 30.40 | 32.42
LRMC [excel_7_1]                | 16.32 | 20.11 | 22.91 | 25.34 | 27.65 | 29.98

Down (short video from HMDB51):

Sampling Rate                   | 0.1   | 0.2   | 0.3   | 0.4   | 0.5   | 0.6
TQN with Random Q (Ours)        | 18.85 | 22.76 | 25.87 | 28.73 | 31.55 | 34.48
TQN with Oracle Q (Ours)        | 23.44 | 27.61 | 31.37 | 35.11 | 38.92 | 42.74
TQN with Adaptive Q (Ours)      | 23.97 | 28.09 | 31.76 | 35.33 | 39.06 | 42.87
TNN [LuIJCAI2018]               | 22.40 | 25.58 | 28.28 | 30.88 | 33.55 | 36.41
tSp (p=2/3) [Kong2018]          | 22.41 | 25.32 | 27.67 | 31.26 | 34.23 | 36.98
SiLRTC [excel_1_1]              | 18.42 | 22.33 | 25.76 | 29.15 | 32.59 | 36.15
Latent Trace Norm [Latent_2013] | 18.94 | 22.72 | 25.65 | 28.26 | 30.79 | 33.48
LRMC [excel_7_1]                | 18.87 | 22.79 | 25.94 | 28.82 | 31.65 | 34.61
TABLE II: Comparisons of PSNR results on COIL images and video inpainting with different sampling rates. Up: the COIL-20 dataset. Down: a short video from HMDB51.

V-B5 Influence of r

Remarks 1, 2, and 3 imply that r encodes the a priori assumption on the subspace dimension of the ground truth: the dimension of the frontal-slice subspace of the true tensor (equivalently, the column subspace of its unfolding matrix) is no more than r. Figure 4 illustrates the relations among running times, different r, and the singular values of the unfolding matrix. We project the solution (in Eq. (25)) onto the subspace spanned by the columns of Q.
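The projection onto the subspace of Q mentioned above can be sketched as follows; this is a minimal NumPy sketch under our own conventions (the function name and the (n1*n2) x n3 unfolding whose rows are the mode-3 tubes are assumptions, not the paper's exact notation):

```python
import numpy as np

def project_onto_q(x, q):
    """Project each mode-3 tube of x (shape n1 x n2 x n3) onto the
    column subspace of the column-orthonormal matrix q (shape n3 x r):
    unfold along the third mode, right-multiply by q q^T, fold back."""
    n1, n2, n3 = x.shape
    unfold3 = x.reshape(n1 * n2, n3)
    return (unfold3 @ q @ q.T).reshape(n1, n2, n3)
```

Since q q^T is an orthogonal projector, applying the map twice gives the same result as applying it once.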

As shown in Figure 4, the column subspace dimension of the unfolding matrix is more than 360. If r is set too small, the algorithm will converge to a bad point whose solution only has an r-dimensional subspace. Therefore, in our experiments, we usually set r large enough to ensure that it is no smaller than the true tensor's subspace dimension. This a priori assumption is commonly used in factorization-based algorithms.

The running time decreases as r decreases. Although our model needs more time to converge than TNN, it obtains a better recovery; a smaller r does speed up the computation but harms the accuracy.

Fig. 4: The relations among running times, different r, and the singular values of the unfolding matrix on COIL.

VI Conclusions

We analyze the advantages and limitations of SNN and TNN based methods, and then propose a new definition of tensor rank named tensor Q-rank. To obtain a more significant low-rank structure w.r.t. Q, we further introduce an explainable selection method for Q and make Q a learnable variable w.r.t. the data. We also provide an envelope of our rank function and apply it to the tensor completion problem. We analyze why our method may perform better than TNN based methods on data that are non-smooth along the third dimension at low sampling rates, and conduct experiments to verify our conclusions.

