Multiplicative Iteration for Nonnegative Quadratic Programming
Abstract
In many applications, it is natural to solve least squares problems subject to nonnegativity constraints. In this article, we present a new multiplicative iteration that monotonically decreases the value of the nonnegative quadratic programming (NNQP) objective function. The new algorithm has a simple closed form and is easily implemented on a parallel machine. We prove the global convergence of the new algorithm and apply it to image super-resolution and color image labeling problems. The experimental results demonstrate the effectiveness and broad applicability of the new algorithm.
X. Xiao and D. Chen
School of Securities and Futures, Southwestern University of Finance and Economics, Sichuan, China, 610041. Email: chendonghui@swufe.edu.cn
1 Introduction
Numerical problems with nonnegativity constraints on solutions are pervasive throughout science, engineering and business. These constraints usually have physical grounds, corresponding to amounts and measurements, such as the solutions associated with image restoration and reconstruction [6, 16, 18, 26] and chemical concentrations [3]. Nonnegativity constraints very often arise in least squares problems, i.e., nonnegative least squares (NNLS):
(1) $\min_{x \ge 0} \frac{1}{2}\|Ax - b\|_2^2$, where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
The problem can be stated equivalently as the following nonnegative quadratic program (NNQP):
(2) $\min_{x \ge 0} F(x) = \frac{1}{2}x^{T}Qx - c^{T}x$, with $Q = A^{T}A$ and $c = A^{T}b$.
NNLS (1) and NNQP (2) have the same unique solution. In this article, we assume $Q \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix and $c \in \mathbb{R}^{n}$.
Since the first NNLS algorithm was introduced by Lawson and Hanson [15], researchers have developed many different techniques to solve (1) and (2), such as active set methods [3, 5, 15], interior point methods [2], and iterative approaches [14]. Among these methods, gradient projection methods are known as the most efficient for problems with simple constraints [19]. In this paper, we develop a new multiplicative gradient projection algorithm for the NNQP problem. Related work can be found in the literature [4, 9, 23].
In [9], the authors studied a special NNLS problem with a nonnegative matrix $A$ and vector $b$, and proposed an algorithm called the image space reconstruction algorithm (ISRA). The corresponding multiplicative iteration is
(3) $x_i^{k+1} = x_i^{k}\,\dfrac{(A^{T}b)_i}{(A^{T}Ax^{k})_i}.$
Proofs of the convergence of ISRA can be found in [1, 10, 12, 20]. More recently, Lee and Seung generalized the idea of ISRA to the problem of nonnegative matrix factorization (NMF) [16]. For a general matrix $Q$ and vector $c$ having both negative and positive entries, the authors of [23] proposed another multiplicative iteration,
$x_i^{k+1} = x_i^{k}\,\frac{c_i + \sqrt{c_i^{2} + 4\,(Q^{+}x^{k})_i\,(Q^{-}x^{k})_i}}{2\,(Q^{+}x^{k})_i},$
where $Q^{+} = \max(Q, 0)$, $Q^{-} = \max(-Q, 0)$, and "max" is the elementwise comparison of two matrices or vectors.
In [4], Brand and Chen also introduced a multiplicative iteration,
$x_i^{k+1} = x_i^{k}\,\frac{(Q^{-}x^{k})_i + c_i^{+}}{(Q^{+}x^{k})_i + c_i^{-}},$
where additionally $c^{+} = \max(c, 0)$ and $c^{-} = \max(-c, 0)$. Both of the above algorithms are proven to converge monotonically to the global minimum of the NNQP objective function (2).
In this paper, we present a new iterative NNLS algorithm along with its convergence analysis. We prove that the quality of the approximation improves monotonically and that the iteration is guaranteed to converge to the global optimal solution. The focus of this paper is the theoretical proof of the monotone convergence of the new algorithm; we leave comparisons with other NNLS algorithms for future research. The remainder of this paper is organized as follows. Section 2 presents the new multiplicative NNLS algorithm and proves that it monotonically decreases the NNQP objective function. Section 3 discusses two applications of the new algorithm to image processing problems: image super-resolution and color image labeling. Finally, Section 4 concludes by summarizing the main advantages of our approach.
2 Multiplicative Iteration and Its Convergence Analysis
In this section we derive the multiplicative iteration and discuss its convergence properties. Consider the NNQP problem (2),
$\min_{x \ge 0} F(x) = \frac{1}{2}x^{T}Qx - c^{T}x,$
where $Q \in \mathbb{R}^{n \times n}$ is positive definite and $c \in \mathbb{R}^{n}$.
2.1 Multiplicative Iteration
The proposed multiplicative iteration for solving (2) is
(4) $x_i^{k+1} = x_i^{k}\,\dfrac{2(Q^{-}x^{k})_i + c_i^{+} + \lambda x_i^{k}}{(Q^{+}x^{k})_i + (Q^{-}x^{k})_i + c_i^{-} + \lambda x_i^{k}}, \qquad i = 1, \dots, n,$
where $Q^{\pm}$ and $c^{\pm}$ are as above, and the constant stabilizer $\lambda > 0$ guarantees that the iteration converges monotonically. We will discuss how to choose $\lambda$ later in this section. If all the entries of $Q$ and $c$ are nonnegative, then $Q^{-} = 0$ and $c^{-} = 0$, and with $\lambda = 0$ the multiplicative update (4) reduces to ISRA [9].
Since all the components of the multiplicative factors, i.e., the matrices $Q^{+}$, $Q^{-}$ and the vectors $c^{+}$, $c^{-}$, $\lambda x^{k}$, are nonnegative, given a nonnegative starting initial guess $x^{0}$, all the generated iterates $x^{k}$ are nonnegative. In generating the sequence $\{x^{k}\}$, the iteration computes the new update $x^{k+1}$ using only the previous vector $x^{k}$; it does not need to keep all the previous updates $x^{0}, \dots, x^{k-1}$. The major computational tasks to be performed at each iteration are the matrix-vector products $Q^{+}x^{k}$ and $Q^{-}x^{k}$. These remarkable properties imply that the multiplicative iteration requires little storage and computation for each iteration.
The iteration (4) is a gradient projection method, which can be seen by writing it as
$x^{k+1} = x^{k} - \operatorname{diag}(\alpha_1, \dots, \alpha_n)\,\nabla F(x^{k}),$
where the stepsizes are $\alpha_i = \dfrac{x_i^{k}}{(Q^{+}x^{k})_i + (Q^{-}x^{k})_i + c_i^{-} + \lambda x_i^{k}}$ and the gradient of the NNQP objective function (2) is $\nabla F(x^{k}) = Qx^{k} - c$.
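For concreteness, the following NumPy sketch implements one step of the update (4) as written above. It is a minimal illustration, not the authors' code: the function names, the default value of the stabilizer $\lambda$, and the random test problem are our own choices.

```python
import numpy as np

def nnqp_step(Q, c, x, lam=1e-6):
    """One step of the multiplicative update (4) for min_{x>=0} 0.5 x^T Q x - c^T x.

    Assumes Q is symmetric positive definite, x is strictly positive,
    and lam > 0 is the stabilizer discussed in Section 2.
    """
    Qp = np.maximum(Q, 0.0)    # Q^+, elementwise positive part of Q
    Qm = np.maximum(-Q, 0.0)   # Q^-, so that Q = Q^+ - Q^-
    cp = np.maximum(c, 0.0)    # c^+
    cm = np.maximum(-c, 0.0)   # c^-, so that c = c^+ - c^-
    numer = 2.0 * (Qm @ x) + cp + lam * x
    denom = (Qp @ x) + (Qm @ x) + cm + lam * x
    return x * numer / denom   # elementwise multiplicative factor

# Tiny usage example on a random positive definite problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
Q, c = A.T @ A, A.T @ rng.standard_normal(20)
x = np.ones(5)                 # any positive initial guess
for _ in range(200):
    x = nnqp_step(Q, c, x)
```

Since only the two matrix-vector products $Q^{+}x^{k}$ and $Q^{-}x^{k}$ are needed and each component is updated independently, the step maps naturally onto parallel hardware.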
2.2 Fixed Point
The proposed iteration (4) is motivated by the Karush-Kuhn-Tucker (KKT) first-order optimality conditions [19]. Consider the Lagrangian function of the NNQP objective function (2) defined by
(5) $L(x, \mu) = \frac{1}{2}x^{T}Qx - c^{T}x - \mu^{T}x,$
with the Lagrange multiplier vector $\mu \in \mathbb{R}^{n}$. Assuming $x^{*}$ is the optimal solution of the Lagrangian function (5), the KKT conditions are
$Qx^{*} - c - \mu = 0, \qquad \mu \ge 0, \qquad x^{*} \ge 0, \qquad \mu \circ x^{*} = 0,$
with $\circ$ denoting the Hadamard product. The above equalities imply that either the $i$th constraint is active ($x_i^{*} = 0$), or $\mu_i = 0$ and $(Qx^{*} - c)_i = 0$ when the $i$th constraint is inactive ($x_i^{*} > 0$). Because $\mu = Qx^{*} - c$, the equality $\mu \circ x^{*} = 0$ implies $(Qx^{*} - c)_i\,x_i^{*} = 0$. Using $Q = Q^{+} - Q^{-}$ and $c = c^{+} - c^{-}$, this is equivalent to
$\left[2(Q^{-}x^{*})_i + c_i^{+} + \lambda x_i^{*}\right]x_i^{*} = \left[(Q^{+}x^{*})_i + (Q^{-}x^{*})_i + c_i^{-} + \lambda x_i^{*}\right]x_i^{*},$
which means that the $i$th multiplicative factor in (4) is the constant 1 whenever $x_i^{*} > 0$. Therefore, any optimal solution satisfying the KKT conditions corresponds to a fixed point of the multiplicative iteration.
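As a numerical sanity check of this fixed-point property, the short script below (our own sketch, not part of the paper) computes a reference NNLS solution with scipy.optimize.nnls and verifies that the numerator and denominator of the factor in (4) agree on the support of the solution.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
Q, c = A.T @ A, A.T @ b
x_star, _ = nnls(A, b)               # KKT point of the NNLS problem (1)

Qp, Qm = np.maximum(Q, 0), np.maximum(-Q, 0)
cp, cm = np.maximum(c, 0), np.maximum(-c, 0)
lam = 1e-6
numer = 2 * (Qm @ x_star) + cp + lam * x_star
denom = (Qp @ x_star) + (Qm @ x_star) + cm + lam * x_star

support = x_star > 1e-12             # inactive constraints, x_i^* > 0
# (Qx* - c)_i = 0 on the support, so the multiplicative factor equals one there.
assert np.allclose(numer[support], denom[support], rtol=1e-6)
```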
2.3 Convergence Analysis
In this section, we prove that the proposed multiplicative iteration (4) monotonically decreases the value of the NNQP objective function (2) to its global minimum. The analysis is based on the construction of an auxiliary function of $F(x)$. Similar techniques have been used in [16, 23, 24].
For the sake of completeness, we begin our discussion with a brief review of the definition of an auxiliary function.
Definition 2.1
Let $x$ and $u$ be two positive vectors. A function $G(x, u)$ is an auxiliary function of $F(x)$ if it satisfies the following two properties:
1. $G(x, u) \ge F(x)$;
2. $G(x, x) = F(x)$.
Figure 1 illustrates the relationship between the auxiliary function $G(x, u)$ and the corresponding objective function $F(x)$. In each iteration, the update $x^{k+1}$ is computed by minimizing the auxiliary function $G(x, x^{k})$. The iteration stops when it reaches a stationary point. The following lemma, which is also presented in [16, 23, 24], proves that the iteration obtained by minimizing the auxiliary function in each step decreases the value of the objective function $F(x)$.
Lemma 2.2
Let $G(x, u)$ be an auxiliary function of $F(x)$. Then $F$ is strictly decreasing under the update
$x^{k+1} = \arg\min_{x} G(x, x^{k})$
if $x^{k+1} \ne x^{k}$.
By the definition of the auxiliary function, if $x^{k+1} \ne x^{k}$, we have
$F(x^{k+1}) \le G(x^{k+1}, x^{k}) < G(x^{k}, x^{k}) = F(x^{k}).$
The middle inequality holds because of the assumption $x^{k+1} = \arg\min_{x} G(x, x^{k})$.
Deriving a suitable auxiliary function for the NNQP objective function (2) is the key step in proving the convergence of our multiplicative iteration. In the following lemma, we prove that two matrices, which are used to build our auxiliary function later, are positive semidefinite.
Lemma 2.3
Let $Q$ be a nonnegative real symmetric matrix without all-zero rows and let $u$ be a vector whose entries are positive. Define the diagonal matrix $D$ with entries
$D_{ii} = \frac{(Qu)_i}{u_i}.$
Then the matrices $D - Q$ and $D + Q$ are positive semidefinite.
Consider the matrices
$\widehat{D} \pm \widehat{Q} = \operatorname{diag}(u)\,(D \pm Q)\,\operatorname{diag}(u),$
where $\operatorname{diag}(u)$ represents the diagonal matrix whose entries on the main diagonal are the entries of the vector $u$. Since $u$ is a positive vector, $\operatorname{diag}(u)$ is invertible. Hence $\widehat{D} \pm \widehat{Q}$ are congruent with $D \pm Q$, correspondingly. The matrices $D \pm Q$ are positive semidefinite if and only if $\widehat{D} - \widehat{Q}$ and $\widehat{D} + \widehat{Q}$ are positive semidefinite [13].
Given any nonzero vector $v$,
$v^{T}(\widehat{D} - \widehat{Q})v = \sum_{i,j} Q_{ij}u_iu_jv_i^{2} - \sum_{i,j} Q_{ij}u_iu_jv_iv_j = \frac{1}{2}\sum_{i,j} Q_{ij}u_iu_j(v_i - v_j)^{2} \ge 0.$
Hence, $\widehat{D} - \widehat{Q}$ is positive semidefinite. Similarly, $\widehat{D} + \widehat{Q}$ is positive semidefinite, since
$v^{T}(\widehat{D} + \widehat{Q})v = \frac{1}{2}\sum_{i,j} Q_{ij}u_iu_j(v_i + v_j)^{2} \ge 0.$
Therefore, $D - Q$ and $D + Q$ are positive semidefinite.
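Lemma 2.3 is also easy to probe numerically. The following sketch (ours, with arbitrary random data) checks that $D - Q$ and $D + Q$ have no eigenvalues below round-off level:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
Q = rng.random((n, n))
Q = Q + Q.T                       # nonnegative, symmetric, no all-zero rows
u = rng.random(n) + 0.1           # strictly positive vector
D = np.diag((Q @ u) / u)          # D_ii = (Qu)_i / u_i

for M in (D - Q, D + Q):
    eigs = np.linalg.eigvalsh(M)  # M is symmetric
    assert eigs.min() > -1e-10    # positive semidefinite up to round-off
```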
Combining the two previous lemmas, we construct an auxiliary function for the NNQP objective (2) as follows.
Lemma 2.4 (Auxiliary Function)
Let the vectors $x$ and $u$ represent two positive vectors, and define the diagonal matrix $K(u)$ with diagonal entries
$K_{ii}(u) = \frac{(Q^{+}u)_i + (Q^{-}u)_i + c_i^{-} + \lambda u_i}{u_i}.$
Then the function
(6) $G(x, u) = F(u) + (x - u)^{T}\nabla F(u) + \frac{1}{2}(x - u)^{T}K(u)(x - u)$
is an auxiliary function for the quadratic model
$F(x) = \frac{1}{2}x^{T}Qx - c^{T}x.$
First of all, it is obvious that $G(x, x) = F(x)$, which is the second property in Definition 2.1. Next, we have to show the first property, $G(x, u) \ge F(x)$, for $x \ne u$.
Notice that $Q$ is the Hessian matrix of $F(x)$, so the Taylor expansion of the quadratic $F$ at $u$ is exact:
$F(x) = F(u) + (x - u)^{T}\nabla F(u) + \frac{1}{2}(x - u)^{T}Q(x - u).$
The difference between $G(x, u)$ and $F(x)$ is
$G(x, u) - F(x) = \frac{1}{2}(x - u)^{T}\left[K(u) - Q\right](x - u),$
which is strictly positive for all $x \ne u$ if and only if $K(u) - Q$ is positive definite.
Recall that $Q = Q^{+} - Q^{-}$ and $c = c^{+} - c^{-}$. Writing $D^{+}$ and $D^{-}$ for the diagonal matrices with entries $D^{+}_{ii} = (Q^{+}u)_i/u_i$ and $D^{-}_{ii} = (Q^{-}u)_i/u_i$, we have
$K(u) - Q = (D^{+} - Q^{+}) + (D^{-} + Q^{-}) + \operatorname{diag}\!\left(\frac{c_i^{-}}{u_i}\right) + \lambda I.$
Because $\lambda I + \operatorname{diag}(c_i^{-}/u_i)$ is positive definite for $\lambda > 0$, and, by Lemma 2.3, $D^{+} - Q^{+}$ and $D^{-} + Q^{-}$ are positive semidefinite, $K(u) - Q$ is positive definite.
Hence, we obtain $G(x, u) > F(x)$ for any positive vectors $x \ne u$. Therefore, $G(x, u)$ is an auxiliary function of $F(x)$.
In the previous proof, we used the fact $\lambda > 0$ to prove that the diagonal matrix $\lambda I + \operatorname{diag}(c_i^{-}/u_i)$ is positive definite. Because the sum $(D^{+} - Q^{+}) + (D^{-} + Q^{-}) + \operatorname{diag}(c_i^{-}/u_i)$ is positive semidefinite for any vector $u$ with all positive entries, we can choose $\lambda$ to be any positive number. In our experiments, it is chosen to be a small positive constant. Armed with the previous lemmas, we are ready to prove the convergence theorem for our multiplicative iteration (4).
Theorem 2.5 (Monotone Convergence)
The value of the objective function $F(x)$ in (2) is monotonically decreasing under the multiplicative update (4). It attains the global minimum of $F(x)$ at the stationary point of the iteration.
By the definition of an auxiliary function (Definition 2.1) and Lemma 2.4, $G(x, u)$ in (6) is an auxiliary function of $F(x)$. Lemma 2.2 shows that the objective function $F$ is monotonically decreasing under the update
$x^{k+1} = \arg\min_{x} G(x, x^{k}).$
It remains to show that the proposed iteration (4) attains the minimum of $G(x, x^{k})$.
By Fermat's theorem [22], taking the first partial derivative of $G(x, x^{k})$ with respect to $x$ and setting it to $0$, we obtain
(7) $\nabla_x G(x, x^{k}) = \nabla F(x^{k}) + K(x^{k})(x - x^{k}) = 0.$
Hence,
$x^{k+1} = x^{k} - K(x^{k})^{-1}\nabla F(x^{k}),$
whose $i$th component is exactly the multiplicative update (4), where we used the facts that $Q = Q^{+} - Q^{-}$ and $c = c^{+} - c^{-}$.
The decreasing sequence $\{F(x^{k})\}$ is bounded below, because $F$ is a strictly convex quadratic. By the Monotone Convergence Theorem [22], the sequence converges to a limit $F^{*}$. Because $F$ is continuous, on any compact domain containing the iterates there exists a limit point $x^{*}$ such that $F(x^{*}) = F^{*}$. At such a point the iteration no longer moves, so for every $i$,
$x_i^{*}\left[2(Q^{-}x^{*})_i + c_i^{+} + \lambda x_i^{*}\right] = x_i^{*}\left[(Q^{+}x^{*})_i + (Q^{-}x^{*})_i + c_i^{-} + \lambda x_i^{*}\right],$
which is equivalent to
$(Qx^{*} - c)_i\,x_i^{*} = 0,$
i.e., $x^{*}$ is a stationary point satisfying the KKT conditions of Section 2.2. Since the NNQP problem (2) is convex, such a point is the global minimizer, and the sequence $\{F(x^{k})\}$ converges to the global minimum as $x^{k}$ approaches the limit point $x^{*}$.
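Theorem 2.5 can be observed directly in a small experiment. The sketch below (our own construction, with a random positive definite $Q$ and mixed-sign $c$) runs the iteration (4) and asserts that the objective values never increase:

```python
import numpy as np

def F(Q, c, x):
    return 0.5 * x @ Q @ x - c @ x   # NNQP objective (2)

rng = np.random.default_rng(3)
n = 100
A = rng.standard_normal((2 * n, n))
Q = A.T @ A                          # symmetric positive definite
c = rng.standard_normal(n)           # entries of both signs
Qp, Qm = np.maximum(Q, 0), np.maximum(-Q, 0)
cp, cm = np.maximum(c, 0), np.maximum(-c, 0)
lam = 1e-6                           # any positive stabilizer

x = np.ones(n)                       # positive initial guess
values = [F(Q, c, x)]
for _ in range(5000):
    x = x * (2 * (Qm @ x) + cp + lam * x) / ((Qp @ x) + (Qm @ x) + cm + lam * x)
    values.append(F(Q, c, x))

# Monotone decrease, up to floating-point round-off.
assert all(v1 <= v0 + 1e-9 for v0, v1 in zip(values, values[1:]))
```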
3 Numerical Experiments
We now illustrate two applications of the proposed NNLS algorithm to image processing problems.
3.1 Image SuperResolution
Image super-resolution (SR) refers to the process of combining a set of low-resolution images into a single high-resolution image [7, 8]. Each low-resolution image $b_i$ is assumed to be generated from an ideal high-resolution image $x$ via a displacement $S_i$, a decimation $D$, and a noise process $\eta_i$:
(8) $b_i = DS_ix + \eta_i, \qquad i = 1, \dots, p.$
We model the displacements with the bilinear interpolation proposed by Chung, Haber and Nagy [8].
The low-resolution test data sets are taken from the Multi-Dimensional Signal Processing Research Group (MDSP) [11]. Figure 2(a) shows four of the uncompressed low-resolution text images, and the high-resolution image reconstructed by our algorithm is shown in Figure 2(b). Figure 3(a) shows four of the low-resolution EIA images, and Figure 3(b) shows the high-resolution image computed by the proposed multiplicative iteration. As shown in the figures, the high-resolution images are visually much better than the low-resolution images.
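To make the data model (8) concrete, the following one-dimensional toy sketch builds a decimation operator and integer shifts, stacks them into a single matrix $A$, and forms the NNQP data $Q = A^{T}A$, $c = A^{T}b$ for the multiplicative iteration. It is our own simplified stand-in: the real experiments use two-dimensional images and the bilinear-interpolation displacements of [8], not circular integer shifts.

```python
import numpy as np

def shift_matrix(n, s):
    """S_i: circular shift by s samples (a crude 1-D surrogate for the
    subpixel displacements of the super-resolution model)."""
    return np.roll(np.eye(n), s, axis=1)

def decimation_matrix(n, d=2):
    """D: average each block of d consecutive samples."""
    D = np.zeros((n // d, n))
    for i in range(n // d):
        D[i, d * i:d * (i + 1)] = 1.0 / d
    return D

n, d, shifts = 64, 2, [0, 1, 2, 3]
x_true = np.clip(np.sin(np.linspace(0, 6, n)), 0, None)  # nonnegative signal
D = decimation_matrix(n, d)
A = np.vstack([D @ shift_matrix(n, s) for s in shifts])  # stacked model (8)
rng = np.random.default_rng(4)
b = A @ x_true + 1e-3 * rng.standard_normal(A.shape[0])  # noisy low-res data

Q, c = A.T @ A, A.T @ b   # NNQP data for the multiplicative iteration (4)
```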
3.2 Color Image Labeling
In Markov random field (MRF) based interactive image segmentation techniques, the user labels a small subset of pixels, and the MRF propagates these labels across the image, typically finding high-gradient contours as segmentation boundaries where the labeling changes [21]. These techniques require users to impose hard constraints by indicating certain pixels (seeds) that absolutely have to be part of the labeling. Intuitively, the hard constraints provide clues as to what the user intends to segment. Denote by $I$ the $m \times n$ test RGB image, and let $I_r$ represent the $3 \times 1$ color vector at pixel $r$.
The class set is denoted by $\mathcal{K} = \{1, 2, \dots, K\}$. Probabilistic labeling approaches compute a probability measure field $p(r) = \left(p_1(r), \dots, p_K(r)\right)$ for each pixel $r$, with the constraints
(9) $\sum_{k=1}^{K} p_k(r) = 1, \qquad p_k(r) \ge 0.$
Denoting by $N_r$ the set of neighbors of pixel $r$, the cost function has the following quadratic form [21]:
(10) $\min_{p}\ \sum_{r}\left[\frac{\mu}{2}\sum_{s \in N_r} w_{rs}\sum_{k=1}^{K}\left(p_k(r) - p_k(s)\right)^{2} + \sum_{k=1}^{K} p_k(r)^{2}\,v_k(r)\right],$
subject to the constraints (9). Here $v_k(r)$ is the cost of assigning label $k$ to pixel $r$. The first term, which controls the granularity of the regions, promotes smooth regions. The spatial smoothness is controlled by the positive parameter $\mu$ and the weights $w_{rs}$, which are chosen such that $w_{rs} \approx 1$ if the neighboring pixels $r$ and $s$ are likely to belong to the same class and $w_{rs} \approx 0$ otherwise. In these experiments, $w_{rs}$ is defined to be the cosine of the angle between two neighboring pixels,
$w_{rs} = \frac{I_r^{T}I_s}{\|I_r\|\,\|I_s\|}.$
The cost of labeling at each pixel, $v_k(r)$, is trained with a Gaussian mixture model [25] using the seeds labeled by the user. Given the sample mean $\hat{m}_k$ and covariance $\hat{\Sigma}_k$ of the seeds with label $k$, $v_k(r)$ is computed as the Mahalanobis distance [17] between each pixel of the image and the seeds,
$v_k(r) = (I_r - \hat{m}_k)^{T}\hat{\Sigma}_k^{-1}(I_r - \hat{m}_k).$
The KKT optimality conditions of the cost function (10), subject to the constraints (9), yield the two-step Algorithm 1.
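The ingredients of this labeling model are simple to express in code. The sketch below is our own simplification: the paper trains $v_k(r)$ with a Gaussian mixture model [25], which we replace here by a single Gaussian per class, and all function names are hypothetical.

```python
import numpy as np

def cosine_weight(I_r, I_s, eps=1e-12):
    """w_rs: cosine of the angle between two neighboring RGB vectors."""
    return (I_r @ I_s) / (np.linalg.norm(I_r) * np.linalg.norm(I_s) + eps)

def label_costs(pixels, seed_means, seed_covs):
    """v_k(r): Mahalanobis distance between each pixel and the class-k seeds.

    pixels:     (N, 3) array of RGB vectors
    seed_means: list of K mean vectors, one per class
    seed_covs:  list of K 3x3 covariance matrices, one per class
    """
    costs = np.empty((len(pixels), len(seed_means)))
    for k, (m, C) in enumerate(zip(seed_means, seed_covs)):
        diff = pixels - m
        costs[:, k] = np.einsum('ij,ij->i', diff @ np.linalg.inv(C), diff)
    return costs
```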
In the experiments, we implement our NNLS algorithm without explicitly constructing the matrix $Q$. Figure 4 shows the results of the labeled images.
4 Summary
In this paper, we presented a new gradient projection NNLS algorithm and its convergence analysis. By expressing the KKT first-order optimality conditions as a ratio, we obtained a multiplicative iteration that can quickly solve large quadratic programs on low-cost parallel compute devices. The iteration converges monotonically from any positive initial guess. We demonstrated applications to image super-resolution and color image labeling. Our algorithm is also applicable to other optimization problems involving nonnegativity constraints. Future research includes comparing the performance of this new NNLS algorithm with other existing NNLS algorithms.
Footnotes
 Email: xxiao@swufe.edu.cn
 Thanks to Julianne Chung for providing the Matlab code.
References
 G.E.B. Archer and D.M. Titterington. The iterative image space reconstruction algorithm as an alternative to the EM algorithm for solving positive linear inverse problems. Statistica Sinica, 5:77–96, 1995.
 S. Bellavia, M. Macconi, and B. Morini. An interior point Newtonlike method for nonnegative leastsquares problems with degenerate solution. Numerical Linear Algebra with Applications, 13(10):825–846, 2006.
 M. Van Benthem and M. Keenan. Fast algorithm for the solution of largescale nonnegativityconstrained least squares problems. J. of Chemometrics, 18(10):441–450, 2004.
M. Brand and D. Chen. Parallel quadratic programming for image processing. In 18th IEEE International Conference on Image Processing (ICIP), pages 2261–2264, September 2011.
 R. Bro and S. De Jong. A fast nonnegativityconstrained least squares algorithm. J. of Chemometrics, 11(5):393–401, 1997.
 D. Calvetti, G. Landi, L. Reichel, and F. Sgallari. Nonnegativity and iterative methods for illposed problems. Inverse Problems, 20(6):1747, 2004.
 D. Chen. Comparisons of multiframe superresolution algorithms for pure translation motion. Master’s thesis, Wake Forest University, WinstonSalem, NC, August 2008.
J. Chung, E. Haber, and J. Nagy. Numerical methods for coupled superresolution. Inverse Problems, 22:1261–1272, 2006.
M. E. Daube-Witherspoon and G. Muehllehner. An iterative image space reconstruction algorithm suitable for volume ECT. IEEE Transactions on Medical Imaging, 5(2):61–66, June 1986.
 P. P. B. Eggermont. Multiplicative iterative algorithms for convex programming. Linear Algebra and its Applications, 130:25–42, 1990.
S. Farsiu, D. Robinson, M. Elad, and P. Milanfar. Fast and robust multiframe super-resolution. IEEE Transactions on Image Processing, 13(10):1327–1344, 2004.
 J. Han, L. Han, M. Neumann, and U. Prasad. On the rate of convergence of the image space reconstruction algorithm. Operators and Matrices, 3(1):41–58, 2009.
 R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
 D. Kim, S. Sra, and I. Dhillon. A new projected quasinewton approach for the nonnegative least squares problem. Technical report, The University of Texas at Austin, 2006.
 C. Lawson and R. Hanson. Solving Least Squares Problems. SIAM, 3rd edition, 1995.
 D. Lee and S. Seung. Algorithms for nonnegative matrix factorization. In Advances in Neural Information Processing Systems 13, pages 556–562. MIT Press, April 2001.
P. Mahalanobis. On the generalised distance in statistics. Proceedings of the National Institute of Sciences of India, 2(1):49–55, April 1936.
J. Nagy and Z. Strakoš. Enforcing nonnegativity in image reconstruction algorithms. In SPIE Conference Series, volume 4121, pages 182–190, October 2000.
 J. Nocedal and S. Wright. Numerical Optimization. Springer, New York, 2nd edition, 2006.
A. De Pierro. On the convergence of the iterative image space reconstruction algorithm for volume ECT. IEEE Transactions on Medical Imaging, 6(2):174–175, June 1987.
 M. Rivera, O. Dalmau, and J. Tago. Image segmentation by convex quadratic programming. In 19th International Conference on Pattern Recognition, 2008, pages 1–5, 2008.
 W. Rudin. Principles of mathematical analysis. McGrawHill Book Co., New York, third edition, 1976. International Series in Pure and Applied Mathematics.
 F. Sha, Y. Lin, L. Saul, and D. Lee. Multiplicative updates for nonnegative quadratic programming. Neural Comput., 19(8):2004–2031, 2007.
 F. Sha, L. Saul, and D. Lee. Multiplicative updates for nonnegative quadratic programming in support vector machines. In Advances in Neural Information Processing Systems 15, pages 1041–1048. MIT Press, 2002.
 L. Xu and M. Jordan. On convergence properties of the EM algorithm for gaussian mixtures. Neural Computation, 8:129–151, 1995.
 R. Zdunek and A. Cichocki. Nonnegative matrix factorization with quadratic programming. Neurocomput., 71:2309–2320, June 2008.