Affine matrix rank minimization problem via p-thresholding function

Angang Cui · Jigen Peng (corresponding author, e-mail: jgpengxjtu@126.com) · Haiyang Li · Qian Zhang

1 School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, 710049, China
2 School of Science, Xi’an Polytechnic University, Xi’an, 710048, China
Received: date / Accepted: date
Abstract

In pursuit of a more efficient algorithm, the $p$-thresholding function is employed to solve the affine matrix rank minimization problem. Numerical experiments on image inpainting problems show that our algorithm is powerful in finding low-rank matrices compared with some state-of-the-art methods.

Keywords:
Affine matrix rank minimization problem · $p$-thresholding function · Generalized thresholding algorithm
Msc:
65K10 · 90C26 · 90C59

1 Introduction

In this paper, we study the affine matrix rank minimization problem (ARMP)

(1)   $\min_{X\in\mathbb{R}^{m\times n}} \operatorname{rank}(X) \quad \text{subject to} \quad \mathcal{A}(X)=b,$

where the linear map $\mathcal{A}:\mathbb{R}^{m\times n}\to\mathbb{R}^{d}$ and the vector $b\in\mathbb{R}^{d}$ are given. ARMP has attracted much attention in many applications, such as collaborative filtering in recommender systems, minimum-order system approximation in control theory, low-dimensional Euclidean embedding, and so on (see, e.g., [1], [2], [3], [4], [5]). It is a challenging non-convex optimization problem and is known to be NP-hard [6].

The nuclear norm is the most popular convex alternative (see, e.g., [1], [4], [6], [7], [8], [9]), and the resulting minimization problem has the following form

(2)   $\min_{X\in\mathbb{R}^{m\times n}} \|X\|_{\ast} \quad \text{subject to} \quad \mathcal{A}(X)=b \qquad (\mathrm{NuARMP})$

for the constrained problem and

(3)   $\min_{X\in\mathbb{R}^{m\times n}} \|\mathcal{A}(X)-b\|_{2}^{2}+\lambda\|X\|_{\ast} \qquad (\mathrm{RNuARMP})$

for the regularization problem, where $\lambda>0$ is the regularization parameter, $\|X\|_{\ast}=\sum_{i=1}^{\min(m,n)}\sigma_{i}(X)$ is the nuclear norm of the matrix $X$, and $\sigma_{i}(X)$ denotes the $i$-th largest singular value of $X$, arranged in descending order.
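For concreteness, the nuclear norm can be evaluated directly from the singular values; the following NumPy check (an illustration of ours, separate from the algorithms below) confirms that this definition agrees with NumPy's built-in matrix norm:

```python
import numpy as np

# The nuclear norm ||X||_* is the sum of the singular values of X.
X = np.random.randn(5, 4)
nuclear = np.linalg.svd(X, compute_uv=False).sum()
assert np.isclose(nuclear, np.linalg.norm(X, ord="nuc"))
```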

As a compact convex relaxation, NuARMP possesses many theoretical and algorithmic advantages (see, e.g., [10], [11], [12], [13]). However, it may be suboptimal for recovering a real low-rank matrix: it can yield a matrix of much higher rank and may need more observations to recover the true low-rank matrix (see, e.g., [1], [11]). Moreover, RNuARMP tends to produce biased estimates by shrinking all the singular values toward zero simultaneously, and sometimes results in over-penalization, just as the $\ell_{1}$-norm does in compressed sensing [14].

This brings our attention to non-convex functions. We substitute the rank function by a continuous low-rank-promoting non-convex function $F$. Through this substitution, ARMP can be translated into a transformed ARMP (TrARMP), which has the following form

(4)   $\min_{X\in\mathbb{R}^{m\times n}} F(X) \quad \text{subject to} \quad \mathcal{A}(X)=b$

for the constrained problem and

(5)   $\min_{X\in\mathbb{R}^{m\times n}} \|\mathcal{A}(X)-b\|_{2}^{2}+\lambda F(X)$

for the regularization problem, where the continuous low-rank-promoting non-convex function $F$ is expressed in terms of the singular values of the matrix $X$, e.g.,

(6)   $F(X)=\sum_{i=1}^{\min(m,n)} f(\sigma_{i}(X)).$

In [15], we take

(7)   $P_{a}(X)=\sum_{i=1}^{\min(m,n)}\rho_{a}(\sigma_{i}(X)),$

where

(8)   $\rho_{a}(t)=\dfrac{a|t|}{a|t|+1}, \qquad a>0,$

is the fraction function.

With the change of the parameter $a>0$, we have

(9)   $\lim_{a\to+\infty}\rho_{a}(t)=\begin{cases}0, & t=0,\\ 1, & t\neq 0.\end{cases}$

So the non-convex function $P_{a}(X)$ interpolates the rank of the matrix $X$:

(10)   $\lim_{a\to+\infty}P_{a}(X)=\operatorname{rank}(X).$

Figure 1: The behavior of the fraction function $\rho_{a}$ for various values of $a$.
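As a quick numerical illustration of (9) and (10) (a sketch of our own), the fraction function approaches the indicator of nonzero values as $a$ grows:

```python
import numpy as np

def rho(t, a):
    """Fraction function rho_a(t) = a|t| / (a|t| + 1) from (8)."""
    return a * np.abs(t) / (a * np.abs(t) + 1.0)

t = np.array([0.0, 0.1, 1.0, 10.0])
for a in (1.0, 10.0, 1000.0):
    print(a, rho(t, a))  # rows tend to [0, 1, 1, 1] as a grows
```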

By this transformation, the minimization problem (ARMP) can be turned into the following transformed form

(11)   $\min_{X\in\mathbb{R}^{m\times n}} P_{a}(X) \quad \text{subject to} \quad \mathcal{A}(X)=b$

for the constrained problem and

(12)   $\min_{X\in\mathbb{R}^{m\times n}} \|\mathcal{A}(X)-b\|_{2}^{2}+\lambda P_{a}(X)$

for the regularization problem.

In [15], the iterative singular value thresholding algorithm (ISVTA) is proposed to solve the minimization problem (12). Numerical experiments on the completion of low-rank random matrices show that ISVTA is powerful in finding a low-rank matrix compared with some state-of-the-art methods. However, the thresholding function for the non-convex fraction function is complicated to compute, and the algorithm converges slowly.

Figure 2: Plot of the $p$-thresholding function $s_{\lambda,p}$ for a few values of $p$. Smaller values of $p$ shrink more values to zero, while shrinking large values less.

In order to pursue a more efficient algorithm, the $p$-thresholding function is used to solve the minimization problem (ARMP) for all $0 < p \le 1$, where the $p$-thresholding function [16] can be defined as

(13)   $s_{\lambda,p}(t)=\operatorname{sign}(t)\max\{|t|-\lambda^{2-p}|t|^{p-1},\,0\}, \qquad \lambda>0.$

In sparse information processing, the $p$-thresholding function performs better in numerical examples than some state-of-the-art methods. When we take $p=1$, the $p$-thresholding function is equivalent to the classical soft thresholding function. For values of $p$ below 1, the thresholding penalizes small coefficients over a wider range and applies less bias to the larger coefficients, much like hard thresholding but without discontinuities (see, e.g., [16]).
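The following NumPy sketch of (13) makes this behavior concrete; the function and parameter names are our own, and for $p=1$ it reproduces soft thresholding:

```python
import numpy as np

def p_threshold(t, lam, p):
    """p-thresholding (p-shrinkage) function (13), applied elementwise.

    Entries with |t| <= lam are set exactly to zero; for p < 1 the
    surviving large entries are biased less than under soft thresholding.
    """
    t = np.asarray(t, dtype=float)
    mag = np.abs(t)
    safe = np.where(mag > 0, mag, 1.0)  # avoid 0 ** (p - 1) for p < 1
    shrunk = np.maximum(mag - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
    return np.sign(t) * np.where(mag > 0, shrunk, 0.0)

print(p_threshold([-3.0, -0.5, 0.5, 3.0], lam=1.0, p=1.0))  # soft: [-2. -0.  0.  2.]
print(p_threshold([-3.0, -0.5, 0.5, 3.0], lam=1.0, p=0.5))  # large entries kept closer to +-3
```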

Moreover, in [17] it is demonstrated that the $p$-thresholding function is the proximal mapping of a non-convex penalty function with several desirable properties.

Following the representation (6), we take

(14)   $G_{\lambda,p}(X)=\sum_{i=1}^{\min(m,n)}g_{\lambda,p}(\sigma_{i}(X)),$

where $g_{\lambda,p}$ is the penalty function induced by the $p$-thresholding function $s_{\lambda,p}$ (see Theorem 2.1 in Section 2). Then, TrARMP can be transformed into the following regularized minimization problem

(15)   $\min_{X\in\mathbb{R}^{m\times n}}\Big\{\frac{1}{2}\|\mathcal{A}(X)-b\|_{2}^{2}+G_{\lambda,p}(X)\Big\} \qquad (\mathrm{RFTrARMP}),$

and the iterative $p$-thresholding algorithm is proposed to solve the minimization problem (RFTrARMP) for all $0 < p \le 1$.

The rest of this paper is organized as follows. In Section 2, the iterative $p$-thresholding algorithm is proposed to solve the minimization problem (RFTrARMP). In Section 3, the convergence of the iterative $p$-thresholding algorithm is established. In Section 4, we present some numerical experiments on image inpainting problems. Some concluding remarks are given in Section 5.

2 $p$-thresholding function for solving (RFTrARMP)

In this section, based on the $p$-thresholding function, the iterative $p$-thresholding algorithm is proposed to solve the minimization problem (RFTrARMP) for all $0 < p \le 1$.

2.1 Iterative $p$-thresholding algorithm for solving (RFTrARMP)

Based on the $p$-thresholding function, we now briefly derive the closed-form representation of the optimal solution to the minimization problem (RFTrARMP) for all $0 < p \le 1$, which underlies the iterative $p$-thresholding algorithm to be proposed. Before deriving the algorithm, we need some results which play a key role in our later analysis.

Definition 1

The $p$-thresholding operator $\mathcal{S}_{\lambda,p}:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is a diagonally nonlinear, analytically expressible operator, and can be specified by

(16)   $\mathcal{S}_{\lambda,p}(x)=\big(s_{\lambda,p}(x_{1}),\,s_{\lambda,p}(x_{2}),\,\ldots,\,s_{\lambda,p}(x_{n})\big)^{\top},$

where the $p$-thresholding function $s_{\lambda,p}$ is defined in (13) for all $0 < p \le 1$.

Theorem 2.1

[17] Suppose $s:\mathbb{R}\to\mathbb{R}$ is continuous, satisfies $0\le s(t)\le t$ for $t\ge 0$, is strictly increasing on $[\tau,+\infty)$ where $\tau=\sup\{t\ge 0: s(t)=0\}$, and $s(t)\to+\infty$ as $t\to+\infty$. Then the threshold operator $s$ is the proximal mapping of a penalty function $g$, where $g$ is even, strictly increasing and continuous on $[0,+\infty)$, differentiable on $(0,+\infty)$, and non-differentiable at $0$ if and only if $\tau>0$ (in which case $\partial g(0)=[-\tau,\tau]$). If $t-s(t)$ is non-increasing on $[\tau,+\infty)$, then $g$ is concave on $[0,+\infty)$ and satisfies the triangle inequality.

Theorem 2.1 shows that the $p$-thresholding function $s_{\lambda,p}$ is the proximal mapping of a non-convex penalty function, denoted by $g_{\lambda,p}$, with several desirable properties.

Lemma 1

(von Neumann’s trace inequality) For any matrices $X,Y\in\mathbb{R}^{m\times n}$,

$\operatorname{tr}(X^{\top}Y)\le\sum_{i=1}^{\min(m,n)}\sigma_{i}(X)\sigma_{i}(Y),$

where

$\sigma_{1}(X)\ge\sigma_{2}(X)\ge\cdots\ge 0$

and

$\sigma_{1}(Y)\ge\sigma_{2}(Y)\ge\cdots\ge 0$

are the singular values of the matrices $X$ and $Y$, respectively. The equality holds if and only if there exist orthogonal matrices $U$ and $V$ such that $X=U\operatorname{Diag}(\sigma(X))V^{\top}$ and $Y=U\operatorname{Diag}(\sigma(Y))V^{\top}$ are the singular value decompositions of the matrices $X$ and $Y$ simultaneously.

Define a function of the matrix $X\in\mathbb{R}^{m\times n}$ as

(17)   $f_{\lambda}(X)=\frac{1}{2}\|X-B\|_{F}^{2}+G_{\lambda,p}(X),$

and define the associated matrix $p$-thresholding operator

(18)   $\mathcal{D}_{\lambda,p}(B)=\arg\min_{X\in\mathbb{R}^{m\times n}}f_{\lambda}(X).$
Theorem 2.2

Let $B=U\Sigma V^{\top}$ be the singular value decomposition of the matrix $B\in\mathbb{R}^{m\times n}$. Then the optimal matrix $\mathcal{D}_{\lambda,p}(B)$ in (18) can be expressed as

(19)   $\mathcal{D}_{\lambda,p}(B)=U\operatorname{Diag}\big(s_{\lambda,p}(\sigma_{1}(B)),\ldots,s_{\lambda,p}(\sigma_{\min(m,n)}(B))\big)V^{\top}.$
Proof

Since $\sigma_{1}(X)\ge\cdots\ge\sigma_{\min(m,n)}(X)\ge 0$ are the singular values of the matrix $X$, the minimization problem

(20)   $\min_{X\in\mathbb{R}^{m\times n}}\Big\{\frac{1}{2}\|X-B\|_{F}^{2}+\sum_{i=1}^{\min(m,n)}g_{\lambda,p}(\sigma_{i}(X))\Big\}$

can be rewritten as

$\min_{X\in\mathbb{R}^{m\times n}}\Big\{\frac{1}{2}\|X\|_{F}^{2}-\langle X,B\rangle+\frac{1}{2}\|B\|_{F}^{2}+\sum_{i=1}^{\min(m,n)}g_{\lambda,p}(\sigma_{i}(X))\Big\}.$

By Lemma 1, we have

$\langle X,B\rangle\le\sum_{i=1}^{\min(m,n)}\sigma_{i}(X)\sigma_{i}(B).$

Notice that the above equality holds when $X$ admits the singular value decomposition

$X=U\operatorname{Diag}(\sigma(X))V^{\top},$

where $U$ and $V$ are the left and right orthogonal matrices in the singular value decomposition of the matrix $B$.

In this case, the optimization problem (20) reduces to

(21)   $\min_{\sigma_{i}(X)\ge 0}\ \sum_{i=1}^{\min(m,n)}\Big\{\frac{1}{2}\big(\sigma_{i}(X)-\sigma_{i}(B)\big)^{2}+g_{\lambda,p}(\sigma_{i}(X))\Big\},$

which decouples into $\min(m,n)$ scalar problems whose solutions, by Theorem 2.1, are $\sigma_{i}(X)=s_{\lambda,p}(\sigma_{i}(B))$. This completes the proof.

Next, we show that the optimal solution to the minimization problem (RFTrARMP) can be expressed in terms of the $p$-thresholding function.

For any fixed $\lambda>0$, $\mu>0$ and $Z\in\mathbb{R}^{m\times n}$, let

(22)   $C_{\lambda}(X)=\frac{1}{2}\|\mathcal{A}(X)-b\|_{2}^{2}+G_{\lambda,p}(X)$

and its surrogate function

(23)   $C_{\lambda,\mu}(X,Z)=\mu\Big(\frac{1}{2}\|\mathcal{A}(X)-b\|_{2}^{2}+G_{\lambda,p}(X)\Big)-\frac{\mu}{2}\|\mathcal{A}(X)-\mathcal{A}(Z)\|_{2}^{2}+\frac{1}{2}\|X-Z\|_{F}^{2}.$

Clearly, $C_{\lambda,\mu}(X,X)=\mu C_{\lambda}(X)$.

Theorem 2.3

For any fixed $\lambda>0$, $\mu>0$ and matrix $Z\in\mathbb{R}^{m\times n}$, $\min_{X}C_{\lambda,\mu}(X,Z)$ is equivalent to

$\min_{X\in\mathbb{R}^{m\times n}}\Big\{\frac{1}{2}\|X-B_{\mu}(Z)\|_{F}^{2}+\mu G_{\lambda,p}(X)\Big\},$

where $B_{\mu}(Z)=Z+\mu\mathcal{A}^{\ast}(b-\mathcal{A}(Z))$.

Proof

In accordance with the definition, $C_{\lambda,\mu}(X,Z)$ can be rewritten as

$C_{\lambda,\mu}(X,Z)=\frac{1}{2}\|X-B_{\mu}(Z)\|_{F}^{2}+\mu G_{\lambda,p}(X)+\frac{\mu}{2}\|b\|_{2}^{2}+\frac{1}{2}\|Z\|_{F}^{2}-\frac{\mu}{2}\|\mathcal{A}(Z)\|_{2}^{2}-\frac{1}{2}\|B_{\mu}(Z)\|_{F}^{2},$

where the last four terms do not depend on $X$. This implies that, for any fixed $\lambda$, $\mu$ and matrix $Z$, $\min_{X}C_{\lambda,\mu}(X,Z)$ is equivalent to

$\min_{X\in\mathbb{R}^{m\times n}}\Big\{\frac{1}{2}\|X-B_{\mu}(Z)\|_{F}^{2}+\mu G_{\lambda,p}(X)\Big\}.$

This completes the proof.

Combining Theorem 2.1, Theorem 2.3 and the proof of Theorem 2.2, we can immediately conclude the following corollary:

Corollary 1

Let the matrix $X_{Z}$ be the optimal solution of $\min_{X}C_{\lambda,\mu}(X,Z)$. Then it can be expressed as

(24)   $X_{Z}=U\operatorname{Diag}\big(s_{\lambda\mu,p}(\sigma_{1}(B_{\mu}(Z))),\ldots,s_{\lambda\mu,p}(\sigma_{\min(m,n)}(B_{\mu}(Z)))\big)V^{\top},$

where $B_{\mu}(Z)=U\Sigma V^{\top}$ is the singular value decomposition of the matrix $B_{\mu}(Z)$, and $U$ and $V$ are the corresponding left and right orthogonal matrices; the factor $\mu$ in front of $G_{\lambda,p}$ rescales the thresholding parameter from $\lambda$ to $\lambda\mu$, just as in the soft-thresholding case $p=1$.
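A minimal sketch of the matrix operator in (24), built on the p_threshold helper sketched in Section 1 (the thresholding parameter $\lambda\mu$ is passed in directly as lam_mu):

```python
import numpy as np

def matrix_p_threshold(B, lam_mu, p):
    """Apply p-thresholding to the singular values of B, as in (24):
    U Diag(s_{lam_mu,p}(sigma(B))) V^T, computed via a full SVD."""
    U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(p_threshold(sigma, lam_mu, p)) @ Vt
```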

Moreover, if we take the parameter $\mu$ properly, we have

Theorem 2.4

For any fixed $\lambda>0$ and $0 < \mu < \|\mathcal{A}\|_{2}^{-2}$, if $X^{\ast}$ is the optimal solution of $\min_{X}C_{\lambda}(X)$, then $X^{\ast}$ is also the optimal solution of $\min_{X}C_{\lambda,\mu}(X,X^{\ast})$, that is,

$C_{\lambda,\mu}(X^{\ast},X^{\ast})\le C_{\lambda,\mu}(X,X^{\ast})$

for any $X\in\mathbb{R}^{m\times n}$.

Proof

By the definition of $C_{\lambda,\mu}$, we have

$C_{\lambda,\mu}(X,X^{\ast})=\mu C_{\lambda}(X)+\frac{1}{2}\|X-X^{\ast}\|_{F}^{2}-\frac{\mu}{2}\|\mathcal{A}(X)-\mathcal{A}(X^{\ast})\|_{2}^{2}\ \ge\ \mu C_{\lambda}(X)\ \ge\ \mu C_{\lambda}(X^{\ast})=C_{\lambda,\mu}(X^{\ast},X^{\ast}),$

where the first inequality holds by the fact that

$\mu\|\mathcal{A}(X)-\mathcal{A}(X^{\ast})\|_{2}^{2}\le\mu\|\mathcal{A}\|_{2}^{2}\|X-X^{\ast}\|_{F}^{2}\le\|X-X^{\ast}\|_{F}^{2}.$

This completes the proof.

With the representation (24), the iterative $p$-thresholding algorithm for the minimization problem (RFTrARMP) can be naturally defined as

(25)   $X^{k+1}=U^{k}\operatorname{Diag}\big(s_{\lambda\mu,p}(\sigma_{1}(B_{\mu}(X^{k}))),\ldots,s_{\lambda\mu,p}(\sigma_{\min(m,n)}(B_{\mu}(X^{k})))\big)(V^{k})^{\top},$

where $B_{\mu}(X^{k})=X^{k}+\mu\mathcal{A}^{\ast}(b-\mathcal{A}(X^{k}))=U^{k}\Sigma^{k}(V^{k})^{\top}$.

  Input: Given $\mathcal{A}$, $b$, $p\in(0,1]$ and the step size $\mu\in(0,\|\mathcal{A}\|_{2}^{-2})$; initialize $X^{0}$ and set $k=0$.
  while not converged do
         Compute $B_{\mu}(X^{k})=X^{k}+\mu\mathcal{A}^{\ast}(b-\mathcal{A}(X^{k}))$ and its SVD $B_{\mu}(X^{k})=U^{k}\Sigma^{k}(V^{k})^{\top}$;
         Update $X^{k+1}$ by (25); set $k\leftarrow k+1$;
  end while
  Output: $X^{\mathrm{opt}}=X^{k}$
Algorithm 1 : Iterative $p$-thresholding algorithm for solving (RFTrARMP)
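For the matrix completion instance of (RFTrARMP), where $\mathcal{A}$ samples the entries on an index set $\Omega$ (so $\mathcal{A}^{\ast}$ zero-fills off $\Omega$ and $\|\mathcal{A}\|_{2}=1$), Algorithm 1 can be sketched as follows. This is an illustration under our own naming; the fixed $\lambda$ is an assumption (Section 2.2 discusses an adaptive choice):

```python
import numpy as np

def ipta_complete(M_obs, mask, p, lam, mu=0.99, tol=1e-6, max_iter=500):
    """Iterative p-thresholding for matrix completion, A(X) = X[mask].

    For this sampling operator ||A||_2 = 1, so any mu < 1 satisfies the
    step-size condition used in the convergence results of Section 3.
    """
    X = np.zeros_like(M_obs)
    for _ in range(max_iter):
        B = X.copy()                                 # B_mu(X) = X + mu A*(b - A(X))
        B[mask] += mu * (M_obs[mask] - X[mask])
        X_new = matrix_p_threshold(B, lam * mu, p)   # iteration (25)
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1.0):
            return X_new
        X = X_new
    return X

# Example: recover a random rank-2 matrix from 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random(M.shape) < 0.6
X_hat = ipta_complete(np.where(mask, M, 0.0), mask, p=0.5, lam=0.5)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))  # relative error
```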

2.2 The choice of the regularization parameter $\lambda$

It is well known that the quality of the solution of a regularization problem depends heavily on the setting of the regularization parameter $\lambda$, and the selection of a proper regularization parameter is a very hard problem. In this paper, a cross-validation method is adopted to choose the regularization parameter $\lambda$.

To make it clear, we suppose that the matrix $X^{\ast}$ of rank $r$ is the optimal solution to the minimization problem (RFTrARMP), and the singular values of the matrix $B_{\mu}(X^{\ast})$ are denoted as

$\sigma_{1}(B_{\mu}(X^{\ast}))\ge\sigma_{2}(B_{\mu}(X^{\ast}))\ge\cdots\ge\sigma_{\min(m,n)}(B_{\mu}(X^{\ast}))\ge 0.$

Since $s_{\lambda\mu,p}(t)=0$ exactly when $|t|\le\lambda\mu$, the representation (24) implies that the following inequalities hold:

$\sigma_{i}(B_{\mu}(X^{\ast}))>\lambda\mu \ \ \text{for}\ i=1,\ldots,r \quad\text{and}\quad \sigma_{i}(B_{\mu}(X^{\ast}))\le\lambda\mu \ \ \text{for}\ i=r+1,\ldots,\min(m,n),$

which implies

(26)   $\dfrac{\sigma_{r+1}(B_{\mu}(X^{\ast}))}{\mu}\ \le\ \lambda\ <\ \dfrac{\sigma_{r}(B_{\mu}(X^{\ast}))}{\mu}.$

In practice, we approximate $X^{\ast}$ by $X^{k}$ in (26), and a choice of $\lambda$ is

(27)   $\lambda_{k}=\dfrac{\sigma_{r+1}(B_{\mu}(X^{k}))}{\mu}$

in applications.

When doing so, the iterative $p$-thresholding algorithm will be adaptive and free from the choice of the regularization parameter $\lambda$.
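A hedged sketch of the adaptive choice (27) in the completion setting, assuming the target rank $r$ is known (names are ours):

```python
import numpy as np

def adaptive_lam(B, r, mu):
    """Choice (27): lam_k = sigma_{r+1}(B_mu(X^k)) / mu, so that exactly
    the r largest singular values survive the threshold lam_k * mu."""
    sigma = np.linalg.svd(B, compute_uv=False)  # descending order
    return sigma[r] / mu                        # 0-indexed: sigma[r] = sigma_{r+1}
```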

3 Convergence of the iterative $p$-thresholding algorithm

In this section, the convergence of the iterative $p$-thresholding algorithm is established under certain conditions.

Theorem 3.1

Let $\{X^{k}\}$ be the sequence generated by Algorithm 1 with the step size $\mu$ satisfying $0 < \mu < \|\mathcal{A}\|_{2}^{-2}$. Then the sequence $\{C_{\lambda}(X^{k})\}$ is decreasing.

Proof

By the proof of Theorem 2.4, we have

(28)   $C_{\lambda,\mu}(X,Z)\ge\mu C_{\lambda}(X) \quad \text{for any } X,Z\in\mathbb{R}^{m\times n}.$

Moreover, according to the definition of $X^{k+1}$ as a minimizer of $C_{\lambda,\mu}(\,\cdot\,,X^{k})$, we have

(29)   $C_{\lambda,\mu}(X^{k+1},X^{k})\le C_{\lambda,\mu}(X^{k},X^{k})=\mu C_{\lambda}(X^{k}).$

Since $0 < \mu < \|\mathcal{A}\|_{2}^{-2}$, we can get that

(30)   $\mu C_{\lambda}(X^{k+1})\le C_{\lambda,\mu}(X^{k+1},X^{k})\le\mu C_{\lambda}(X^{k}).$

That is, the sequence $\{X^{k}\}$ is a minimization sequence of the function $C_{\lambda}$, and

$C_{\lambda}(X^{k+1})\le C_{\lambda}(X^{k})$

for all $k\ge 0$. This completes the proof.

Theorem 3.2

Let $\{X^{k}\}$ be the sequence generated by Algorithm 1 and $0 < \mu < \|\mathcal{A}\|_{2}^{-2}$. Then the sequence $\{X^{k}\}$ is asymptotically regular, i.e.,

$\lim_{k\to\infty}\|X^{k+1}-X^{k}\|_{F}=0.$

Proof

Let $\theta=1-\mu\|\mathcal{A}\|_{2}^{2}$. Then $\theta\in(0,1)$ and

(31)   $\frac{1}{2}\|X^{k+1}-X^{k}\|_{F}^{2}-\frac{\mu}{2}\|\mathcal{A}(X^{k+1})-\mathcal{A}(X^{k})\|_{2}^{2}\ \ge\ \frac{\theta}{2}\|X^{k+1}-X^{k}\|_{F}^{2}.$

By (30) and the definition of $C_{\lambda,\mu}$, we have

(32)   $\mu C_{\lambda}(X^{k+1})+\frac{1}{2}\|X^{k+1}-X^{k}\|_{F}^{2}-\frac{\mu}{2}\|\mathcal{A}(X^{k+1})-\mathcal{A}(X^{k})\|_{2}^{2}=C_{\lambda,\mu}(X^{k+1},X^{k})\le\mu C_{\lambda}(X^{k}).$

Combining (31) and (32), and summing over $k$, we get

$\sum_{k=0}^{N}\frac{\theta}{2}\|X^{k+1}-X^{k}\|_{F}^{2}\ \le\ \mu\big(C_{\lambda}(X^{0})-C_{\lambda}(X^{N+1})\big)\ \le\ \mu C_{\lambda}(X^{0}) \quad \text{for all } N.$

Thus, the series $\sum_{k=0}^{\infty}\|X^{k+1}-X^{k}\|_{F}^{2}$ is convergent, which implies that

$\lim_{k\to\infty}\|X^{k+1}-X^{k}\|_{F}=0.$

This completes the proof.
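Theorem 3.2 is easy to check empirically: recording the step norms $\|X^{k+1}-X^{k}\|_{F}$ along the iteration (25) on a small completion instance (reusing the helpers sketched in Section 2) shows them tending to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
M_obs = np.where(mask, M, 0.0)
X = np.zeros_like(M)
mu, lam, p = 0.99, 0.5, 0.5
steps = []
for _ in range(200):
    B = X.copy()
    B[mask] += mu * (M_obs[mask] - X[mask])
    X_new = matrix_p_threshold(B, lam * mu, p)  # iteration (25)
    steps.append(np.linalg.norm(X_new - X))
    X = X_new
print(steps[0], steps[-1])  # ||X^{k+1} - X^k||_F decays toward zero
```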

Theorem 3.3

Let $\{X^{k}\}$ be the sequence generated by Algorithm 1 and $0 < \mu < \|\mathcal{A}\|_{2}^{-2}$. Then the sequence $\{X^{k}\}$ converges to a stationary point of the iteration (25).

Proof

Denote

$B_{\mu}(X)=X+\mu\mathcal{A}^{\ast}(b-\mathcal{A}(X)),$

and let

$\mathcal{D}_{\lambda\mu,p}(B)=U\operatorname{Diag}\big(s_{\lambda\mu,p}(\sigma_{1}(B)),\ldots,s_{\lambda\mu,p}(\sigma_{\min(m,n)}(B))\big)V^{\top}$

denote the matrix $p$-thresholding operator in (24). Then $B_{\mu}$ is continuous,

and by (24), we have

$X^{k+1}=\mathcal{D}_{\lambda\mu,p}(B_{\mu}(X^{k})).$

Assume that $X^{\ast}$ is a limit point of $\{X^{k}\}$; then there exists a subsequence of $\{X^{k}\}$, which is denoted as $\{X^{k_{j}}\}$, such that $X^{k_{j}}\to X^{\ast}$ as $j\to\infty$. Since the iterative scheme gives

$X^{k_{j}+1}=\mathcal{D}_{\lambda\mu,p}(B_{\mu}(X^{k_{j}})),$

we have, by Theorem 3.2,

$\|X^{k_{j}+1}-X^{k_{j}}\|_{F}\to 0,$

which implies that

(33)   $X^{k_{j}+1}\to X^{\ast} \quad \text{as } j\to\infty.$

By (33), it follows that

$\mathcal{D}_{\lambda\mu,p}(B_{\mu}(X^{k_{j}}))\to X^{\ast}.$

Since $B_{\mu}$ is continuous, we get

$B_{\mu}(X^{k_{j}})\to B_{\mu}(X^{\ast}).$

Combining the following fact that the $p$-thresholding function $s_{\lambda\mu,p}$ is continuous, so that the operator $\mathcal{D}_{\lambda\mu,p}$ is continuous as well, we have

$\mathcal{D}_{\lambda\mu,p}(B_{\mu}(X^{k_{j}}))\to\mathcal{D}_{\lambda\mu,p}(B_{\mu}(X^{\ast})).$

This implies that the limit point $X^{\ast}$ of the sequence $\{X^{k}\}$ satisfies the equation

$X^{\ast}=\mathcal{D}_{\lambda\mu,p}(B_{\mu}(X^{\ast})).$

This completes the proof.

4 Numerical experiments

In this section, we carry out a series of simulations to demonstrate the performance of the iterative $p$-thresholding algorithm on image inpainting problems. We first present the numerical results of the iterative $p$-thresholding algorithm for image inpainting problems, and then compare it with some other methods: the singular value thresholding algorithm (SVTA) [18] and the iterative singular value thresholding algorithm (ISVTA) [19].

We denote the following quantities, which help to quantify the difficulty of the low-rank matrix recovery problems:

$\mathrm{SR}=\dfrac{q}{mn}, \qquad \mathrm{FR}=\dfrac{q}{r(m+n-r)},$

where $q=|\Omega|$ is the cardinality of the observation set $\Omega$ whose entries are sampled randomly.

$\mathrm{FR}$ is the ratio between the number of sampled entries and the ’true dimensionality’ of an $m\times n$ matrix of rank $r$, and it is a good quantity to serve as the information oversampling ratio.
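In the completion setting these quantities are immediate to compute from the sampling mask; a small helper (names are ours):

```python
def sr_fr(mask, r):
    """Sampling ratio SR = |Omega|/(mn) and oversampling ratio
    FR = |Omega|/(r(m + n - r)) for an m x n boolean mask and rank r."""
    m, n = mask.shape
    q = int(mask.sum())
    return q / (m * n), q / (r * (m + n - r))
```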

The stopping criterion is usually as follows:

$\dfrac{\|X^{k+1}-X^{k}\|_{F}}{\max\{\|X^{k}\|_{F},1\}}\le\varepsilon,$

where $X^{k}$ and $X^{k+1}$ are the numerical results from two consecutive iterative steps and $\varepsilon$ is a given small number. In addition, we measure the accuracy of the generated solution $X^{\mathrm{opt}}$ of our algorithm by the relative error (RE), defined as

$\mathrm{RE}=\dfrac{\|X^{\mathrm{opt}}-M\|_{F}}{\|M\|_{F}},$

where $M$ is the original matrix. In all of the experiments, the stopping tolerance $\varepsilon$ and the step size $\mu$ are kept fixed.
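The stopping rule and the relative error might be implemented as follows; this is a sketch, with the tolerance value eps and the reference matrix M as assumptions:

```python
import numpy as np

def stopped(X_new, X_old, eps=1e-6):
    """Stop when ||X^{k+1} - X^k||_F / max(||X^k||_F, 1) <= eps."""
    return np.linalg.norm(X_new - X_old) / max(np.linalg.norm(X_old), 1.0) <= eps

def relative_error(X_opt, M):
    """RE = ||X_opt - M||_F / ||M||_F against the original matrix M."""
    return np.linalg.norm(X_opt - M) / np.linalg.norm(M)
```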

Figure 3: Original intracranial venous image (IVI) and its approximation with rank $r=30$ (MRI).
Figure 4: Low-rank image and its corresponding images under partial observation with $\mathrm{SR}=0.40$ and $\mathrm{SR}=0.30$, respectively.

Table 1: The comparison of the iterative $p$-thresholding algorithm with different values of $p$ for IVI.

Table 2: The comparison of the iterative $p$-thresholding algorithm with different values of $p$ for IVI.

Tables 1 and 2 report the numerical results of the iterative $p$-thresholding algorithm for image inpainting problems with different values of $p$, and they single out the best choice of $p$ for image inpainting problems.

Image (Name, rank, FR) | $p$-thresholding (RE / Time) | ISVTA (RE / Time) | SVTA (RE / Time)
(IVI, 30, 2.8323) | 3.84e-05 / 120.180 | 6.74e-05 / 241.822 | 1.23e-01 / 42.456
Table 3: The comparison of the iterative $p$-thresholding algorithm, ISVTA and SVTA for image inpainting problems with $\mathrm{SR}=0.40$.

Image (Name, rank, FR) | $p$-thresholding (RE / Time) | ISVTA (RE / Time) | SVTA (RE / Time)
(IVI, 30, 2.1242) | 6.29e-05 / 435.706 | 1.57e-04 / 454.403 | 2.85e-01 / 35.664
Table 4: The comparison of the iterative $p$-thresholding algorithm, ISVTA and SVTA for image inpainting problems with $\mathrm{SR}=0.30$.

The numerical results of the iterative $p$-thresholding algorithm, ISVTA and SVTA compared in Tables 3 and 4 under the same circumstances show that the iterative $p$-thresholding algorithm performs much better than ISVTA and SVTA on image inpainting problems.

5 Conclusions

In this paper, the $p$-thresholding function is taken to solve the affine matrix rank minimization problem. Numerical experiments on image inpainting problems show that our algorithm is powerful in finding a low-rank matrix compared with other methods.

Acknowledgements.
The work was supported by the National Natural Science Foundation of China (11131006, 11271297) and the Science Foundation of Shaanxi Province of China (2015JM1012).

References

  • (1) E. J. Candès, B. Recht, Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9, 717-772 (2009)
  • (2) D. Jannach, M. Zanker, A. Felfernig and G. Friedrich, Recommender Systems: An Introduction. Cambridge University Press, New York (2012)
  • (3) M. Fazel, H. Hindi and S. Boyd, A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, Arlington, VA, 6, 4734-4739 (2001)
  • (4) M. Fazel, H. Hindi and S. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proceedings of the American Control Conference, Denver, Colorado, 3, 2156-2162 (2003)
  • (5) S. Ji, K. F. Sze and Z. Zhou, Beyond convex relaxation: A polynomial-time nonconvex optimization approach to network localization. In Proceedings of IEEE INFOCOM 2013, 2499-2507 (2013)
  • (6) B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52, 471-501 (2010)
  • (7) E. J. Candès, T. Tao, The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56, 2053-2080 (2010)
  • (8) M. Fazel, Matrix Rank Minimization with Applications. PhD thesis, Stanford University (2002)
  • (9) E. J. Candès, Y. Plan, Matrix completion with noise. Proceedings of the IEEE, 98, 925-936 (2010)
  • (10) Y. Liu, D. Sun and K. C. Toh, An implementable proximal point algorithmic framework for nuclear norm minimization. Mathematical Programming, 133, 399-436 (2012)
  • (11) J. Cai, E. J. Candès and Z. W. Shen, A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20, 1956-1982 (2010)
  • (12) K. C. Toh, S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization, 6, 615-640 (2012)
  • (13) S. Ma, D. Goldfarb and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 128, 321-353 (2011)
  • (14) I. Daubechies, M. Defrise and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11), 1413-1457 (2004)
  • (15) H. Li, Q. Zhang, A. Cui and J. Peng, Minimization of fraction function penalty in compressed sensing. arXiv:1705.06048v1 [math.OC], 17 May 2017
  • (16) S. Voronin, R. Chartrand, A new generalized thresholding algorithm for inverse problems with sparsity constraints. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1636-1640 (2013)
  • (17) R. Chartrand, Shrinkage mappings and their induced penalty functions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1026-1029 (2014)
  • (18) J. Cai, E. J. Candès and Z. W. Shen, A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20, 1956-1982 (2010)
  • (19) A. Cui, J. Peng, H. Li, C. Zhang and Y. Yu, Affine matrix rank minimization problem via non-convex fraction function penalty. arXiv:1611.07777v3 [math.OC], 30 Apr 2017