Online Optimization for Large-Scale Max-Norm Regularization
Abstract
The max-norm regularizer has been extensively studied in the last decade, as it promotes effective low-rank estimation for the underlying data. However, such max-norm regularized problems are typically formulated and solved in a batch manner, which prevents them from processing big data due to the memory budget. In this paper, we hence propose an online algorithm that scales to the large-scale setting. In particular, we consider the matrix decomposition problem as an example, although a simple variant of the algorithm and analysis can be adapted to other important problems such as matrix completion. The crucial technique in our implementation is to reformulate the max-norm into an equivalent matrix factorization form, where the factors consist of a (possibly over-complete) basis component and a coefficient component. In this way, we may maintain the basis component in memory and alternately optimize over it and the coefficients for each sample. Since the memory footprint of the basis component is independent of the sample size, our algorithm is appealing when manipulating a large collection of samples. We prove that the sequence of solutions (i.e., the basis component) produced by our algorithm converges to a stationary point of the expected loss function asymptotically. A numerical study demonstrates encouraging results for the efficacy and robustness of our algorithm compared to the widely used nuclear norm solvers.
Keywords: Low-Rank Matrix, Max-Norm, Stochastic Optimization, Matrix Factorization
1 Introduction
In the last decade, estimating low-rank matrices has attracted increasing attention in the machine learning community owing to its successful applications in a wide range of fields, including subspace clustering [LLY10], collaborative filtering [FSS12] and robust dimensionality reduction [CLMW11], to name a few. Suppose that we are given an observed data matrix Z in R^{p x n}, i.e., n observations in p ambient dimensions; we aim to learn a prediction matrix X with a low-rank structure so as to approximate the observation. This problem, together with its many variants, typically involves minimizing a weighted combination of the residual error and a penalty for the matrix rank.
Generally speaking, it is intractable to optimize the matrix rank [RFP10]. To tackle this challenge, researchers have suggested alternative convex relaxations of the matrix rank. The two most widely used convex surrogates are the nuclear norm^1 [RFP10] and the max-norm (a.k.a. the gamma_2-norm) [SRJ04]. ^1Also known as the trace norm, the Ky Fan n-norm and the Schatten 1-norm. The nuclear norm is defined as the sum of the matrix singular values. Like the l1 norm in the vector case, which induces sparsity, the nuclear norm was proposed as a rank minimization heuristic and can be formulated as a semidefinite programming (SDP) problem [FHB01]. By combining the SDP formulation and the matrix factorization technique, [SRJ04] showed that the collaborative filtering problem can be effectively solved by optimizing a soft-margin based program. Another interesting application of the nuclear norm comes from the data compression community. In real-world applications, due to possible sensor failure and background clutter, the underlying data can easily be corrupted. In this case, the estimate produced by Principal Component Analysis (PCA) may deviate far from the true subspace [Jol05]. To handle (gross) corruption, in the seminal work of [CLMW11], Candès et al. proposed a new formulation called Robust PCA (RPCA), and proved that under mild conditions, solving a convex optimization problem consisting of a nuclear norm regularization and a weighted l1 norm penalty can exactly recover the low-rank component of the underlying data even if a constant fraction of the entries are arbitrarily corrupted. Notably, they also provided a range for the trade-off parameter which guarantees exact recovery.
The max-norm was developed as another convex relaxation of the rank function [SRJ04], where Srebro et al. formulated the max-norm regularized problem as an SDP and empirically showed its superiority to the nuclear norm. The main theoretical study of the max-norm comes from [SS05], where Srebro and Shraibman considered collaborative filtering as an example and proved that the max-norm schema enjoys a lower generalization error than the nuclear norm. Following these theoretical foundations, [JS12] improved the error bound for the clustering problem. Another important contribution of [JS12] is a partial characterization of the subgradient of the max-norm, a hard mathematical entity that has not been fully understood to date. However, since SDP solvers do not scale, there is a large gap between this theoretical progress and the practical applicability of the max-norm. To bridge the gap, a number of follow-up works attempted to design efficient algorithms for max-norm regularized or constrained problems. For example, [RS05] devised a gradient-based optimization method and empirically showed promising results on large collaborative filtering datasets, and [LRS10] presented large-scale optimization methods for max-norm constrained and max-norm regularized problems and showed convergence to a stationary point.
Nevertheless, the algorithms presented in prior works [SRJ04, RS05, LRS10, OAS12] require access to all the data when the objective function involves a max-norm regularizer. In the large-scale setting, the applicability of such batch optimization methods is hindered by the memory bottleneck. In this paper, we therefore propose an online algorithm to solve max-norm regularized problems. The main advantage of online algorithms is that the memory cost is independent of the sample size, which makes them a good fit for the big data era.
To be more detailed, we are interested in a general max-norm regularized matrix decomposition (MRMD) problem. Assume that the observed data matrix Z can be decomposed into a low-rank component X and some structured noise E; we aim to simultaneously and accurately estimate the two components by solving the following convex program:

(1.1)  min_{X,E}  (1/2)||Z - X - E||_F^2 + (lambda_1/2)||X||_max^2 + lambda_2 h(E).

Here, ||.||_F denotes the Frobenius norm, which is a commonly used metric for evaluating the residual, ||.||_max is the max-norm (which promotes low-rankness), and lambda_1 and lambda_2 are two nonnegative parameters. h(.) is some (convex) regularizer that can be adapted to various kinds of noise. We require that it can be represented as a summation of column norms. Formally, there exists some regularizer h~(.), such that

(1.2)  h(E) = sum_{i=1}^{n} h~(e_i),

where e_i is the i-th column of E. Admissible examples include:

h(E) = ||E||_1. That is, the l1 norm of E seen as a long vector, which is used to handle sparse corruption. In this case, h~(.) is the vector l1 norm. Note that when equipped with this norm, the above problem reduces to the well-known RPCA formulation [CLMW11], but with the nuclear norm replaced by the max-norm.

h(E) = sum_{i=1}^{n} ||e_i||_2. This is defined as the summation of the column l2 norms, which is effective when a small fraction of the samples are contaminated (recall that each column of Z is a sample). In this case, h~(.) is the l2 norm. This matrix norm is typically used to handle outliers and, interestingly, the above program becomes Outlier PCA [XCM13] in this case.
Hence, MRMD (1.1) is general enough, and our algorithmic and theoretical results hold for this general form, covering important problems including max-norm regularized RPCA, max-norm regularized Outlier PCA and maximum margin matrix factorization. Furthermore, with a careful design, the formulation (1.1) can be extended to address the matrix completion problem [CR09], as we will show in Section 5.
1.1 Contributions
In summary, our main contributions are twofold: 1) We are the first to develop an online algorithm for a family of max-norm regularized problems (1.1), which finds a wide range of applications in machine learning. We also show that our approach can be used to solve other popular max-norm regularized problems such as matrix completion. 2) We prove that the solutions produced by our algorithm converge to a stationary point of the expected loss function asymptotically (see Section 4).
Compared to our earlier work [SXL14], the formulation (1.1) considered here is more general, and a complete proof is provided. In addition, we present an extensive study on the subspace recovery task, whose results support the conjecture that the max-norm outperforms the nuclear norm in terms of convergence rate and robustness.
1.2 Related Works
Here we discuss some relevant works in the literature. Most previous works on the max-norm focused on showing that it is empirically superior to the nuclear norm in real-world problems, such as collaborative filtering [SRJ04], clustering [JS12] and Hamming embedding [NMS14]. Other works, for instance [SS10], studied the influence of the data distribution under max-norm regularization and observed good performance even when the data are sampled non-uniformly. There are also interesting works investigating the connection between the max-norm and the nuclear norm. A comprehensive study on this problem, in the context of collaborative filtering, can be found in [SS05], which established and compared generalization bounds for nuclear norm and max-norm regularization, showing that the latter results in a tighter bound. More recently, [FSS12] attempted to unify them to gain further insight.
Also in line with this work is matrix decomposition. As mentioned, when we penalize the noise with the l1 matrix norm, (1.1) reverts to the well-known RPCA formulation [CLMW11]; the only difference is that [CLMW11] analyzed the RPCA problem with the nuclear norm, while (1.1) employs the max-norm. Owing to the explicit form of the subgradient of the nuclear norm, [CLMW11] established a dual certificate for the success of their formulation, which facilitates their theoretical analysis. In contrast, the max-norm is a much harder mathematical entity (even its subgradient has not been fully characterized). Hence, it remains challenging to understand the behavior of the max-norm regularizer in the general setting (1.1). Studying the conditions for the exact recovery of MRMD is beyond the scope of this paper; we leave it for future work.
From a high level, the goal of this paper is similar to that of [FXY13]. Motivated by the celebrated RPCA problem [CLMW11, XCM13, XCS12], [FXY13] developed an online implementation for nuclear-norm regularized matrix decomposition. Yet, since the max-norm is a more complicated mathematical entity, new techniques and insights are needed in order to develop online methods for max-norm regularization. For example, after converting the max-norm to its matrix factorization form, the data are still coupled, and we propose to transform the problem into a constrained one for stochastic optimization.
The main technical contribution of this paper is converting max-norm regularization into a matrix factorization problem amenable to online implementation. Compared to [MBPS10], which also studies online matrix factorization, our formulation contains an additional structured noise term that brings the benefit of robustness to contamination. Some of our proof techniques also differ. For example, to prove the convergence of the dictionary and to make their problem well defined, [MBPS10] assumed that the magnitude of the learned dictionary is constrained. In contrast, we prove that the optimal basis is uniformly bounded, and hence our problem is naturally well defined.
1.3 Roadmap
The rest of the paper is organized as follows. Section 2 begins with basic notation and the problem definition, followed by a reformulation of the MRMD problem which turns out to be amenable to online optimization. Section 3 then elaborates the online implementation of MRMD, and Section 4 establishes the convergence guarantee under mild assumptions. In Section 5, we show that our framework can easily be extended to other max-norm regularized problems, such as matrix completion. Numerical performance of the proposed algorithm is presented in Section 6. Finally, we conclude the paper in Section 7. All proofs are deferred to the appendix.
2 Problem Setup
Notation. We use bold lower-case letters to denote vectors. The l1 norm and l2 norm of a vector v are denoted by ||v||_1 and ||v||_2, respectively. Capital letters, such as M, are used to denote matrices. In particular, the letter I_d is reserved for the identity matrix of size d by d. For a matrix M, the i-th row and j-th column are written as m^i and m_j respectively, and the (i, j)-th entry is denoted by m_{ij}. There are four matrix norms that will be heavily used in the paper: ||M||_F for the Frobenius norm, ||M||_1 for the l1 norm of M seen as a long vector, ||M||_max for the max-norm, induced by the product of the l_{2,inf} norms of the factors of M, and ||M||_{2,inf} for the l_{2,inf} norm. Here, the l_{2,inf} norm is defined as the maximum l2 row norm. The trace of a square matrix M is denoted as Tr(M). Finally, for a positive integer n, we use [n] to denote the integer set {1, 2, ..., n}.
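As a concrete reference for this notation, the l_{2,inf} norm and the factorization bound on the max-norm can be computed in a few lines of NumPy. This is an illustrative sketch; the function names are ours, not from the paper:

```python
import numpy as np

def l2_inf_norm(M):
    """||M||_{2,inf}: the maximum l2 norm among the rows of M."""
    return float(np.max(np.linalg.norm(M, axis=1)))

def max_norm_upper_bound(U, V):
    """||U||_{2,inf} * ||V||_{2,inf}: an upper bound on ||U V^T||_max,
    since the max-norm is the minimum of this product over all
    factorizations X = U V^T."""
    return l2_inf_norm(U) * l2_inf_norm(V)

M = np.array([[3.0, 4.0],
              [1.0, 0.0]])
assert np.isclose(l2_inf_norm(M), 5.0)  # row norms are 5 and 1
```

Any single factorization only certifies an upper bound on the max-norm; the definition takes the infimum over all of them.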
We are interested in developing an online algorithm for the MRMD problem (1.1). To this end, we note that the max-norm [SRJ04] is defined as follows:

(2.1)  ||X||_max := min_{U,V} { ||U||_{2,inf} * ||V||_{2,inf} : X = U V^T, U in R^{p x d}, V in R^{n x d} },

where d is an upper bound on the intrinsic dimension of the underlying data. Plugging the above into (1.1), we obtain an equivalent form:

(2.2)  min_{U,V,E}  (1/2)||Z - U V^T - E||_F^2 + (lambda_1/2)||U||_{2,inf}^2 ||V||_{2,inf}^2 + lambda_2 h(E).

In this paper, if not specified, "equivalent" means that the optimal value of the objective function is unchanged. Intuitively, the variable U serves as a (possibly over-complete) basis for the clean data while, correspondingly, the variable V works as a coefficient matrix, with each row holding the coefficients of one sample (recall that we organize the observed samples column-wise). In order to make the new formulation (2.2) equivalent to MRMD (1.1), the quantity d should be sufficiently large due to (2.1).
Challenge. At first sight, the problem can only be optimized in a batch manner, for which the memory cost is prohibitive. To see this, note that we are considering the regime n >> p. Hence, the basis component U is eligible for memory storage since its size is independent of the number of samples. As E is column-wise regularized (see (1.2)), we are able to update each column of E by accessing only one sample. Nevertheless, the size of the coefficient matrix V is proportional to n. In order to optimize the above program over V, we have to compute the gradient with respect to it. Recall that the l_{2,inf} norm counts the largest l2 row norm of V, hence coupling all the samples (each row of V is associated with one sample).
Fortunately, we have the following proposition, which alleviates the interdependency among the rows of V, hence facilitating an online algorithm in which the rows of V can be optimized sequentially.
Proposition 1.
Problem (2.2) is equivalent to the following constrained program:

(2.3)  min_{U,V,E}  (1/2)||Z - U V^T - E||_F^2 + (lambda_1/2)||U||_{2,inf}^2 + lambda_2 h(E),  s.t.  ||V||_{2,inf} <= 1.

Moreover, there exists an optimal solution (U*, V*, E*) attained at the boundary of the feasible set, i.e., ||V*||_{2,inf} is equal to the unit.
Proof.
Let us denote rho = ||V||_{2,inf}. We presume rho is positive. Otherwise, the low-rank component we aim to recover is a zero matrix, which is of little interest. Now we construct two auxiliary variables U~ = rho * U and V~ = V / rho. Replacing U and V with U~ and V~ in (2.2) respectively, we have U~ V~^T = U V^T and ||U~||_{2,inf}^2 ||V~||_{2,inf}^2 = ||U||_{2,inf}^2 ||V||_{2,inf}^2, so the objective value is unchanged. That is, we are to solve

min_{U~,V~,E}  (1/2)||Z - U~ V~^T - E||_F^2 + (lambda_1/2)||U~||_{2,inf}^2 ||V~||_{2,inf}^2 + lambda_2 h(E).

The fact that ||V~||_{2,inf} = ||V||_{2,inf} / rho = 1 and that ||V~||_{2,inf} is the maximum l2 row norm of V~ implies that the factor ||V~||_{2,inf}^2 can be dropped. Therefore, we can reformulate our MRMD problem as a constrained program:

min_{U,V,E}  (1/2)||Z - U V^T - E||_F^2 + (lambda_1/2)||U||_{2,inf}^2 + lambda_2 h(E),  s.t.  ||V||_{2,inf} = 1.

To see why the above program is equivalent to (2.3), we only need to show that each optimal solution (U*, V*, E*) of (2.3) with U* nonzero must satisfy ||V*||_{2,inf} = 1. Suppose that rho' := ||V*||_{2,inf} < 1. Let U~ = rho' * U* and V~ = V* / rho'. Obviously, (U~, V~, E*) is still feasible and U~ V~^T = U* V*^T. However, the objective value becomes

(1/2)||Z - U* V*^T - E*||_F^2 + (lambda_1/2) (rho')^2 ||U*||_{2,inf}^2 + lambda_2 h(E*),

which is strictly smaller and hence contradicts the assumption that (U*, V*, E*) is an optimal solution. Thus we complete the proof. ∎
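The rescaling argument above is easy to check numerically. The following sketch (with arbitrary random factors and our own helper name) verifies that moving the scale of V into U leaves the product unchanged and the penalty product invariant, while placing V on the boundary of the feasible set:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, d = 6, 8, 3
U = rng.standard_normal((p, d))
V = rng.standard_normal((n, d))

def l2_inf(M):
    """Maximum l2 row norm of M."""
    return float(np.max(np.linalg.norm(M, axis=1)))

rho = l2_inf(V)
U_bar, V_bar = rho * U, V / rho

# The product (hence the residual term) is unchanged...
assert np.allclose(U_bar @ V_bar.T, U @ V.T)
# ...the rescaled coefficient matrix sits on the boundary of the feasible set...
assert np.isclose(l2_inf(V_bar), 1.0)
# ...and the product of the squared row-norm bounds is preserved.
assert np.isclose(l2_inf(U_bar) ** 2 * l2_inf(V_bar) ** 2,
                  l2_inf(U) ** 2 * l2_inf(V) ** 2)
```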
Remark 2.
Consequently, equipped with Proposition 1, we can rewrite the original problem in an online fashion, with each sample being processed separately:

(2.4)  min_{U,V,E}  sum_{i=1}^{n} [ (1/2)||z_i - U v_i - e_i||_2^2 + lambda_2 h~(e_i) ] + (lambda_1/2)||U||_{2,inf}^2,  s.t.  ||v_i||_2 <= 1 for all i in [n],

where z_i is the i-th observation, v_i (the i-th row of V) is its coefficient vector and e_i is the structured error penalized by the (convex) regularizer h~(.) (recall that we require h(E) to decompose column-wise). Merging the residual and noise penalty terms above gives a compact form:

(2.5)  min_U  sum_{i=1}^{n}  min_{||v_i||_2 <= 1, e_i}  l~(z_i, U, v_i, e_i) + (lambda_1/2)||U||_{2,inf}^2,

where

(2.6)  l~(z, U, v, e) := (1/2)||z - U v - e||_2^2 + lambda_2 h~(e).

This is indeed equivalent to optimizing (i.e., minimizing) the empirical loss function:

(2.7)  min_U  f_n(U),

where

(2.8)  f_n(U) := (1/n) sum_{i=1}^{n} l(z_i, U) + (lambda_1/(2n))||U||_{2,inf}^2,

and

(2.9)  l(z, U) := min_{v, e: ||v||_2 <= 1}  l~(z, U, v, e).
Note that by Proposition 1, as long as d is sufficiently large, the program (2.7) is equivalent to the primal formulation (1.1), in the sense that both attain the same minimum objective value. Compared to MRMD (1.1), which was solved in a batch manner in prior works, the formulation (2.7) paves the way for a stochastic optimization procedure, since all the samples are decoupled.
3 Algorithm
Based on the derivation in the preceding section, we are now ready to present our online algorithm for the MRMD problem (1.1). The implementation is outlined in Algorithm 1. Here we first briefly explain the underlying intuition. We optimize the coefficients, the structured noise and the basis in an alternating manner, with only the basis and two accumulation matrices kept in memory. At the t-th iteration, given the basis U_{t-1} produced by the previous iteration, we can optimize (2.9) by examining the Karush-Kuhn-Tucker (KKT) conditions. To obtain a new iterate U_t, we then minimize the following objective function:

(3.1)  g_t(U) := (1/t) ( sum_{i=1}^{t} l~(z_i, U, v_i, e_i) + (lambda_1/2)||U||_{2,inf}^2 ),

where the past iterates {v_i} and {e_i} are already on hand. It can be verified that (3.1) is a surrogate of the empirical loss (2.8), since the obtained v_i's and e_i's are suboptimal for a generic U. Interestingly, instead of recording all the past v_i's and e_i's, we only need to store two accumulation matrices whose sizes are independent of the sample size n, as shown in Algorithm 1. In the sequel, we elaborate on each step of Algorithm 1.
3.1 Update the coefficients and noise
Given a sample z and a basis U, we estimate the optimal coefficients v and noise e by minimizing l~(z, U, v, e). That is, we solve the following program:

(3.2)  min_{v,e}  (1/2)||z - U v - e||_2^2 + lambda_2 h~(e),  s.t.  ||v||_2 <= 1.

We notice that the constraint only involves the variable v, and that in order to optimize e we only need to consider the residual term in the objective function. This motivates us to employ a block coordinate descent algorithm: namely, we alternately optimize one variable with the other fixed, until some stopping criterion is fulfilled. In our implementation, when the difference between the current and the previous iterate falls below a prescribed tolerance, or the number of iterations exceeds 100, the algorithm stops and returns the (approximate) optima.
3.1.1 Optimize the coefficients
Now it remains to show how to compute a new iterate for one variable when the other is fixed. According to [Ber99], when the objective function is strongly convex with respect to (w.r.t.) each block variable, the convergence of the alternating minimization procedure is guaranteed. In our case, we observe that this condition holds for v but not necessarily for e. In fact, strong convexity w.r.t. v holds if and only if the basis U has full column rank. When U is rank deficient, we may compute the Moore-Penrose pseudo-inverse to solve for v. However, for computational efficiency, we instead append a small jitter (gamma/2)||v||_2^2 to the objective when necessary, so as to guarantee convergence. In this way, we obtain a potentially admissible iterate for v as follows:

(3.3)  v = (U^T U + gamma I_d)^{-1} U^T (z - e).

Here, gamma is set to zero if and only if U has full column rank.
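As a sketch, the update (3.3) is a (possibly ridge-regularized) least-squares solve. Assuming NumPy and our own function name:

```python
import numpy as np

def update_v(U, z, e, gamma=0.0):
    """Unconstrained coefficient update v = (U^T U + gamma*I)^{-1} U^T (z - e).
    gamma > 0 is the small jitter used when U is rank deficient."""
    d = U.shape[1]
    return np.linalg.solve(U.T @ U + gamma * np.eye(d), U.T @ (z - e))

rng = np.random.default_rng(1)
U = rng.standard_normal((10, 3))   # full column rank almost surely
z = rng.standard_normal(10)
e = np.zeros(10)
v = update_v(U, z, e)
# v is the least-squares minimizer of ||z - U v - e||_2^2: the residual
# must be orthogonal to the columns of U.
assert np.allclose(U.T @ (z - U @ v), 0.0)
```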
Next, we examine whether v violates the inequality constraint in (3.2). If it happens to be a feasible solution, i.e., ||v||_2 <= 1, we have found the new iterate for v. Otherwise, we conclude that the optimum of v must be attained on the boundary of the feasible set, i.e., ||v||_2 = 1, in which case the minimizer can be found by the method of Lagrange multipliers:

(3.4)  min_v  (1/2)||z - U v - e||_2^2 + (eta/2)(||v||_2^2 - 1),

where eta > 0 is the dual variable. By differentiating the objective function with respect to v, we have

(3.5)  v = (U^T U + eta I_d)^{-1} U^T (z - e).
In order to facilitate the computation, we make the following argument.
Proposition 3.
Let v be given by (3.5), where U, z and e are assumed to be fixed. Then the l2 norm of v is strictly monotonically decreasing with respect to the quantity of the dual variable eta.
Proof.
For simplicity, let us denote

b = U^T (z - e),

where b is a fixed vector, so that v = (U^T U + eta I_d)^{-1} b. Suppose we have a full singular value decomposition (SVD) U = S Sigma D^T, where the singular values (i.e., the diagonal elements of Sigma) are arranged in decreasing order and at most min(p, d) of them are nonzero. Substituting U with its SVD, we obtain the squared l2 norm of v:

||v||_2^2 = || Lambda(eta) D^T b ||_2^2,

where Lambda(eta) is a diagonal matrix whose i-th diagonal element equals 1/(sigma_i^2 + eta).

For any two entities eta_1 > eta_2 > 0, it is easy to see that the matrix Lambda(eta_1)^2 - Lambda(eta_2)^2 is negative definite, since each diagonal element 1/(sigma_i^2 + eta_1)^2 - 1/(sigma_i^2 + eta_2)^2 is negative. Hence, it always holds that

||v(eta_1)||_2^2 - ||v(eta_2)||_2^2 = (D^T b)^T ( Lambda(eta_1)^2 - Lambda(eta_2)^2 ) (D^T b) < 0,

which concludes the proof. ∎
The above proposition offers an efficient computational scheme, i.e., the bisection method, for searching for the optimal v as well as the dual variable eta. In more detail, we can maintain a lower bound eta_low and an upper bound eta_high, such that ||v(eta_low)||_2 >= 1 and ||v(eta_high)||_2 < 1. According to the monotonicity shown in Proposition 3, the optimal eta must fall into the interval [eta_low, eta_high]. By evaluating the l2 norm of v at the middle point (eta_low + eta_high)/2, we can sequentially shrink the interval until ||v||_2 is close to one. Note that we can initialize eta_low with zero (since ||v(0)||_2 being larger than one implies that the optimal eta is positive). The bisection routine is summarized in Algorithm 2.
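A minimal sketch of such a bisection routine might look as follows. We stress that the initial bracket growth and the stopping rule are our own choices for this sketch, not necessarily those of Algorithm 2:

```python
import numpy as np

def constrained_v(U, z, e, tol=1e-8, max_iter=200):
    """Solve min_v 0.5*||z - U v - e||_2^2 s.t. ||v||_2 <= 1 by locating
    the dual variable eta in v(eta) = (U^T U + eta*I)^{-1} U^T (z - e)
    with bisection, exploiting the monotonicity of ||v(eta)||_2."""
    d = U.shape[1]
    G, b = U.T @ U, U.T @ (z - e)
    v0 = np.linalg.pinv(G) @ b          # unconstrained (minimum-norm) solution
    if np.linalg.norm(v0) <= 1.0:
        return v0                        # constraint inactive
    v_of = lambda eta: np.linalg.solve(G + eta * np.eye(d), b)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(v_of(hi)) > 1.0:   # grow hi until ||v(hi)|| < 1
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(v_of(mid)) > 1.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol * max(1.0, hi):
            break
    return v_of(hi)                      # feasible, with norm close to one
```

When the unconstrained solution is infeasible, the returned iterate sits (numerically) on the boundary ||v||_2 = 1, as Proposition 3 predicts.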
3.1.2 Optimize the Noise
We have clarified the technique used for solving v in Problem (3.2) when e is fixed. Now let us turn to the phase where v is fixed and we want to find the optimal e. Since e is an unconstrained variable, it is generally much easier to solve for, although one may employ different strategies for various regularizers h~(.). Here, we discuss the solutions for popular choices of the regularizer.
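For instance, with the sparse-corruption regularizer h~(e) = ||e||_1, the e-subproblem min_e (1/2)||r - e||_2^2 + lambda_2 ||e||_1 with residual r = z - Uv has the classical closed-form soft-thresholding solution. The sketch below states this standard result; it is not reproduced from the paper's own (elided) equations:

```python
import numpy as np

def soft_threshold(r, lam):
    """Elementwise minimizer of 0.5*(r_i - e_i)^2 + lam*|e_i|:
    shrink each entry of r toward zero by lam."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

r = np.array([2.0, -0.3, 0.5, -1.5])
assert np.allclose(soft_threshold(r, 0.5),
                   np.array([1.5, 0.0, 0.0, -1.0]))
```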
Finally, for completeness, we summarize the routine for updating the coefficients and the noise in Algorithm 3. The readers may refer to the preceding paragraphs for details.
3.2 Update the basis
With all the past filtration on hand, we are able to compute a new basis U_t by minimizing the surrogate function (3.1). That is, we solve the following program:

(3.8)  min_U  (1/t) ( sum_{i=1}^{t} [ (1/2)||z_i - U v_i - e_i||_2^2 + lambda_2 h~(e_i) ] + (lambda_1/2)||U||_{2,inf}^2 ).

By a simple expansion, for any i in [t], we have

(3.9)  (1/2)||z_i - U v_i - e_i||_2^2 = (1/2) Tr(U^T U v_i v_i^T) - Tr(U^T (z_i - e_i) v_i^T) + (1/2)||z_i - e_i||_2^2.

Substituting back into Problem (3.8), putting A_t = sum_{i=1}^{t} v_i v_i^T and B_t = sum_{i=1}^{t} (z_i - e_i) v_i^T, and removing constant terms, we obtain

(3.10)  U_t = argmin_U  (1/t) ( (1/2) Tr(U^T U A_t) - Tr(U^T B_t) + (lambda_1/2)||U||_{2,inf}^2 ).

In order to derive the optimal solution, we first need to characterize the subgradient of the squared l_{2,inf} norm. Let Q = { i : ||u^i||_2 = ||U||_{2,inf} } denote the set of row indices attaining the maximum l2 row norm of U, and let Lambda be a positive semidefinite diagonal matrix with Tr(Lambda) = 1 whose nonzero diagonal elements are supported on Q. In this way, the subgradient of (1/2)||U||_{2,inf}^2 can be formalized as follows:

(3.11)  d( (1/2)||U||_{2,inf}^2 ) = { Lambda U : Lambda diagonal, Lambda_{ii} >= 0, Tr(Lambda) = 1, Lambda_{ii} = 0 for all i not in Q }.

Equipped with the subgradient, we may apply block coordinate descent to update each column of U sequentially. We assume that the objective function in (3.10) is strongly convex w.r.t. U, which implies that the block coordinate descent scheme always converges to the global optimum [Ber99].
We summarize the update procedure in Algorithm 4. In practice, we find that after revealing a large number of samples, performing a one-pass update of each column of U is sufficient to guarantee a desirable accuracy, which matches the observation in [MBPS10].
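A simplified one-pass column update for (3.10) can be sketched as follows. We stress that this is our own sketch: Lambda is taken to put unit weight on a single row of maximum l2 norm (a valid choice in (3.11)), and the per-column step size is a heuristic rather than the exact rule of Algorithm 4. For lambda_1 = 0 each column is minimized exactly given the others, so the objective is non-increasing:

```python
import numpy as np

def update_basis(U, A, B, lam1):
    """One pass of column-wise updates for
        min_U 0.5*Tr(U^T U A) - Tr(U^T B) + 0.5*lam1*||U||_{2,inf}^2,
    where A = sum_i v_i v_i^T and B = sum_i (z_i - e_i) v_i^T."""
    U = U.copy()
    for j in range(U.shape[1]):
        # Subgradient of 0.5*||U||_{2,inf}^2: unit weight on one max row.
        lam_U = np.zeros_like(U)
        top = int(np.argmax(np.linalg.norm(U, axis=1)))
        lam_U[top] = U[top]
        grad_j = U @ A[:, j] - B[:, j] + lam1 * lam_U[:, j]
        U[:, j] -= grad_j / (A[j, j] + lam1)   # heuristic step size
    return U
```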
3.3 Memory and Computational Cost
As one of the main contributions of this paper, our OMRMD algorithm (i.e., Algorithm 1) is appealing for large-scale problems (the regime n >> p) since its memory cost is independent of n. To see this, note that when computing the optimal coefficients and noise, only the sample z_t and the basis U_{t-1} are accessed, which costs O(pd). Storing the accumulation matrix A_t requires O(d^2) memory, while B_t requires O(pd). Finally, only A_t and B_t are needed for the computation of the new iterate U_t. Therefore, the total memory cost of OMRMD is O(pd), i.e., independent of n. In contrast, the SDP formulation introduced by [SRJ04] and the local-search heuristic [RS05] (for which, moreover, no convergence guarantee was derived) both require memory that grows with the sample size n. Even the recently proposed algorithm of [LRS10] needs to store the entire data matrix, and thus its memory cost is O(pn).
In terms of computational efficiency, our algorithm can be fast, although this is not the main point of this work. One may notice that the computation is dominated by solving Problem (3.2). Evaluating (3.5) involves the inverse of a d x d matrix followed by matrix-matrix and matrix-vector multiplications, for a total of O(pd^2 + d^3) = O(pd^2) operations (since d <= p). For the basis update, obtaining a subgradient of the squared l_{2,inf} norm is O(pd), since we need to calculate the l2 norms of all rows of U followed by a multiplication with a diagonal matrix (see (3.11)). A one-pass update of the columns of U, as shown in Algorithm 4, costs O(pd^2). Thus, the per-sample computational complexity of OMRMD is O(pd^2). Note that the quadratic dependence on d is acceptable in most cases, since d is the estimated rank and hence typically much smaller than p.
4 Theoretical Analysis and Proof Sketch
In this section we present our main theoretical result regarding the validity of the proposed algorithm. We first discuss some necessary assumptions.
4.1 Assumptions

The observed samples are independent and identically distributed (i.i.d.) with a compact support Z. This is a very common scenario in real-world applications.

The minimizer for Problem (2.9) is unique. Notice that l~(z, U, v, e) is strongly convex w.r.t. v and convex w.r.t. e. We can enforce this assumption by adding a jitter (epsilon/2)||e||_2^2 to the objective function, where epsilon is a small positive constant.
4.2 Main Results
It is easy to see that Algorithm 1 is devised to optimize the empirical loss function (2.8). In stochastic optimization, however, we are mainly interested in the expected loss function, defined as the average loss incurred as the number of samples goes to infinity. Since the samples are assumed i.i.d., we have (almost surely)

(4.1)  f(U) := E_z[ l(z, U) ] = lim_{n -> inf} f_n(U).
The main theoretical result of this work is stated as follows.
Theorem 4 (Convergence to a stationary point of the expected loss function). Let {U_t} be the sequence of basis solutions produced by Algorithm 1. Then, under the assumptions of Section 4.1, U_t converges to a stationary point of the expected loss function (4.1) when t tends to infinity.
Remark 5.
The theorem establishes the validity of our algorithm. Note that, on one hand, the transformation (2.1) enables an amenable online implementation of the max-norm. On the other hand, due to the non-convexity of the new formulation (2.3), it is generally hard to attain a local, or even global, minimizer [Ber99]. Although Burer and Monteiro [BM05] showed that, under some conditions, any local minimum of an SDP is also the global optimum (note that the max-norm problem can be transformed into an SDP [SRJ04]), it is hard to determine whether a given solution is a local optimum or merely a stationary point. In the empirical study of Section 6, we find that the solutions produced by our algorithm always converge to the global optimum when the samples are independently and identically drawn from a Gaussian distribution. We leave further analysis of the rationale for future work.
4.3 Proof Outline
The essential tools for our analysis come from stochastic approximation [Bot98] and asymptotic statistics [VdV00]. There are four key stages in our proof; the full proof can be found in Appendix A.
Stage I. We first show that all the stochastic variables are uniformly bounded. This property is crucial because it justifies that the problem we solve is well defined. Moreover, the uniform boundedness will be heavily used for deriving subsequent important results (e.g., the Lipschitz continuity of the surrogate) that establish our main theorem.
Proposition 6 (Uniform bound of all stochastic variables).
Let {v_t, e_t, U_t} be the sequence of optimal solutions produced by Algorithm 1. Then,

For any t > 0, the optimal solutions v_t and e_t are uniformly bounded.

For any t > 0, the accumulation matrices A_t and B_t are uniformly bounded.

There exists a compact set U, such that for any t > 0, we have U_t in U.
Proof.
(Sketch) The uniform bound of e_t follows by constructing a trivial solution for (2.6), which yields an upper bound on the optimum of the objective function. Notably, the upper bound here only involves a quantity depending on z, which is assumed to be uniformly bounded. Since ||v_t||_2 is always upper bounded by the unit, the first claim follows. The second claim follows immediately by combining the first claim and Assumption 1. In order to show that U_t is uniformly bounded, we utilize the first-order optimality condition of the surrogate (3.1). Since A_t is positive definite, we can represent U_t in terms of B_t, the subgradient of the squared l_{2,inf} norm, and the inverse of A_t; the Frobenius norm of the subgradient is in turn bounded by that of U_t. Hence, it follows that U_t is uniformly bounded. ∎
Remark 7.
Corollary 8 (Uniform bound and Lipschitz of the surrogate).
Stage II. We next show that the positive stochastic process {g_t(U_t)} converges almost surely. To establish the convergence, we verify that g_t(U_t) is a quasi-martingale [Bot98], which converges almost surely. To this end, we show that the expected discrepancy between g_{t+1}(U_{t+1}) and g_t(U_t) can be upper bounded by a family of functions indexed by U. We then show that this family of functions is P-Donsker [VdV00], so their summands concentrate around the expectation within an O(1/sqrt(t)) ball almost surely. Therefore, we conclude that g_t(U_t) is a quasi-martingale and converges almost surely.
Proposition 9.
Let v*(z, U) and e*(z, U) denote the minimizers of l~(z, U, v, e):

(v*(z, U), e*(z, U)) := argmin_{v, e: ||v||_2 <= 1}  l~(z, U, v, e).

Then, the function l(z, .) defined in Problem (2.9) is continuously differentiable and

grad_U l(z, U) = ( U v*(z, U) + e*(z, U) - z ) v*(z, U)^T.

Furthermore, grad_U l(z, .) is uniformly Lipschitz over the compact set U.
Proof.
The expression of the gradient of l follows from Lemma 20. Since each term in the gradient is uniformly bounded, we conclude the uniform Lipschitz property of grad_U l w.r.t. U.
∎
Corollary 10 (Uniform bound and Lipschitz of the empirical loss).
Let f_t(.) be the empirical loss function defined in (2.8). Then f_t(.) is uniformly bounded and Lipschitz over the compact set U.
Corollary 11 (PDonsker of ).
The set of measurable functions { l(z, U) : U in U } is P-Donsker (see the definition in Lemma 19).
Proposition 12 (Concentration of the empirical loss).
Proof.
Since l(z, U) is uniformly upper bounded (Corollary 8) and is always nonnegative, its square is uniformly upper bounded, and hence so is its expectation. Combining this with Corollary 11, Lemma 19 applies.
∎
Theorem 13 (Convergence of the surrogate).
Proof.
Stage III. We then prove that the sequence of empirical losses {f_t(U_t)}, with f_t defined in (2.8), converges almost surely to the same limit as its surrogate {g_t(U_t)}. According to the central limit theorem, we assert that f_t(U_t) also converges almost surely to the expected loss f(U_t) defined in (4.1), implying that g_t(U_t) and f(U_t) converge to the same limit almost surely.
We first show the numerical convergence of the basis sequence {U_t}, based on which we establish the convergence of f_t(U_t) by applying Lemma 26.
Proposition 14 (Numerical convergence of the basis component).
Let {U_t} be the basis sequence produced by Algorithm 1. Then,

(4.2)  ||U_{t+1} - U_t||_F = O(1/t).
Theorem 15 (Convergence of the empirical and expected loss).
Let {f(U_t)} be the sequence of expected losses, where {U_t} is the sequence of solutions produced by Algorithm 1. Then, we have

The sequence of empirical losses {f_t(U_t)} converges almost surely to the same limit as the surrogate {g_t(U_t)}.

The sequence of expected losses {f(U_t)} converges almost surely to the same limit as the surrogate {g_t(U_t)}.
Proof.
Let b_t := g_t(U_t) - f_t(U_t). We show that the infinite series sum_t b_t/(t+1) is bounded, by applying the central limit theorem to f_t together with the result of Theorem 13. We further prove that |b_{t+1} - b_t| can be bounded by O(1/t), owing to the uniform boundedness and Lipschitz continuity of g_t and f_t and the numerical convergence of U_t. According to Lemma 26, we conclude that b_t converges to zero, hence the first claim. The second claim then follows immediately from the central limit theorem. ∎
Final Stage. According to Claim 2 of Theorem 15 and the fact that 0 belongs to the subgradient of g_t evaluated at U_t, we show that the gradient of f evaluated at U_t vanishes as t tends to infinity, which establishes Theorem 4. To this end, we note that since {U_t} is uniformly bounded, the non-differentiable max-norm term, which is scaled by 1/t in g_t, vanishes as t goes to infinity, implying the differentiability of the limit of g_t. On the other hand, we show that the gradients of f and of g_t are always Lipschitz on the compact set U, implying the existence of their second-order derivatives even as t tends to infinity. Thus, by taking a first-order Taylor expansion and letting t go to infinity, we establish the main theorem.
5 Connection to Matrix Completion
While we mainly focus on the matrix decomposition problem, our method can be extended to the matrix completion (MC) problem [CCS10, CR09] with max-norm regularization [CZ13], another popular topic in machine learning and signal processing. The max-norm regularized MC problem can be described as follows:

min_X  (1/2)||P_Omega(Z - X)||_F^2 + (lambda_1/2)||X||_max^2,

where Omega is the set of indices of observed entries in Z and P_Omega is the orthogonal projection onto the span of matrices vanishing outside of Omega, so that the (i, j)-th entry of P_Omega(Z) is equal to z_{ij} if (i, j) is in Omega and zero otherwise. Interestingly, the max-norm regularized MC problem can be cast into our framework. To see this, let us introduce an auxiliary matrix M, with m_{ij} = 1 if (i, j) is in Omega and m_{ij} = 0 otherwise. The reformulated MC problem,
(5.1) 
where “” denotes the entry-wise product, is comparable to our MRMD formulation (1.1). It is easy to show that as tends to infinity, the reformulated problem converges to the original MC problem.
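To make the role of the entry-wise product concrete, the following numpy sketch computes the weighted residual of the reformulated objective. The names (`masked_residual`, the weight `rho` on unobserved entries) are illustrative assumptions, not the paper's notation; the point is only that the projection onto observed entries becomes an entry-wise mask, and that shrinking the weight on unobserved entries recovers the original MC residual.

```python
import numpy as np

def masked_residual(Z, L, R, omega_mask, rho=1e-3):
    """Entry-wise weighted residual for a reformulated MC objective.

    omega_mask: boolean array, True where the entry of Z is observed.
    rho: hypothetical small weight on unobserved entries; as rho -> 0
         (equivalently, as the corresponding penalty weight grows),
         the reformulation approaches the original MC problem.
    """
    W = np.where(omega_mask, 1.0, rho)   # auxiliary weight matrix
    return W * (Z - L @ R.T)             # "*" is the entry-wise product
```

With `rho = 0` the residual vanishes exactly on the unobserved entries, mirroring the orthogonal projection in the original formulation.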
5.1 Online Implementation
We now derive a stochastic implementation for the max-norm regularized MC problem. Note that the only difference between Problem (5.1) and Problem (1.1) is the regularization on , which results in a new penalty on for (originally defined in (2.6)):
(5.2) 
Here, is a column of the matrix in (5.1). According to the definition of , is a vector whose elements are either or . Let us define two support sets as follows:
where is the th element of vector . In this way, the newly defined can be written as
(5.3) 
Notably, since and are disjoint, given , and , the variable in (5.3) can be optimized separately by soft-thresholding:
(5.4) 
With this rule at hand, we propose Algorithm 5 for the online max-norm regularized matrix completion (OMRMC) problem. The update rule for is the same as that described in Algorithm 3, and the rule for is given by (5.4). Note that we can use Algorithm 4 to update as usual.
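The per-sample update split over the two disjoint supports can be sketched as follows. This is a hedged illustration, not the paper's exact rule (5.4): we assume the residual on the observed coordinates is soft-thresholded with a parameter `lam`, while on the unobserved coordinates the variable simply absorbs the residual, a common choice when the loss does not penalize unobserved entries.

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding operator S_lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def update_e(z, L, r, observed, lam):
    """Illustrative per-sample update over two disjoint supports.

    observed: boolean vector marking the observed coordinates of z.
    Observed coordinates are soft-thresholded; unobserved ones
    (hypothetically) take the plain residual.
    """
    resid = z - L @ r
    return np.where(observed, soft_threshold(resid, lam), resid)
```

Because the two supports are disjoint, the two branches never interact, which is exactly what allows the closed-form, coordinate-separable update.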
Having described the algorithm for OMRMC, we turn to the theoretical analysis. All of the results for OMRMD apply to OMRMC, which can be verified straightforwardly.
6 Experiments
In this section, we report numerical results on synthetic data to demonstrate the effectiveness and robustness of our online max-norm regularized matrix decomposition (OMRMD) algorithm. The experimental settings used throughout this section are elaborated below.
Data Generation. The simulation data are generated following a procedure similar to that in [CLMW11]. The clean data matrix is produced by , where and . The entries of and are sampled i.i.d. from the normal distribution . We use sparse corruption in the experiments and introduce a parameter to control the sparsity of the corruption matrix : a fraction of its entries are nonzero, drawn i.i.d. from a uniform distribution over . Finally, the observation matrix is produced by .
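The generation protocol above can be sketched in a few lines of numpy. The sizes, the corruption magnitude, and the function name are illustrative placeholders (the paper's exact parameter values are elided here); the structure (low-rank product plus sparse uniform corruption) follows the text.

```python
import numpy as np

def generate_data(p=400, n=5000, d=10, rho=0.1, noise_mag=1.0, seed=0):
    """Synthetic low-rank-plus-sparse data (illustrative sizes)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(p, d))         # basis factor, i.i.d. normal entries
    V = rng.normal(size=(n, d))         # coefficient factor
    X = U @ V.T                         # clean low-rank matrix
    E = np.zeros((p, n))                # sparse corruption matrix
    mask = rng.random((p, n)) < rho     # a rho-fraction of entries corrupted
    E[mask] = rng.uniform(-noise_mag, noise_mag, size=mask.sum())
    return X + E, U                     # observation and ground-truth basis
```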
Baselines. We mainly compare with two methods: Principal Component Pursuit (PCP) and online robust PCA (ORPCA). PCP is the state-of-the-art batch method for subspace recovery, presented as a robust formulation of PCA in [CLMW11]. ORPCA is an online implementation of PCP (strictly speaking, an online version of stable PCP [ZLW10]) and achieves state-of-the-art performance among online subspace recovery algorithms. To illustrate robustness, we sometimes also report results for online PCA [AJL02], which incrementally learns the principal components without accounting for noise.
Evaluation Metric. Our goal is to estimate the correct subspace for the underlying data. We evaluate how well the estimated subspace basis matches the ground-truth basis via the Expressed Variance (EV) [XCM10]:
(6.1) 
The values of EV lie in , and a higher value indicates a more accurate recovery.
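A minimal sketch of the metric, assuming one common form of the Expressed Variance: the fraction of the ground-truth basis energy captured by the estimated subspace (the precise definition in [XCM10] may differ in normalization, so treat this as illustrative).

```python
import numpy as np

def expressed_variance(U_est, U_true):
    """Assumed EV form: ||P_est @ U_true||_F^2 / ||U_true||_F^2,
    where P_est projects onto the column span of U_est.
    Values lie in [0, 1]; 1 means exact subspace recovery."""
    Q, _ = np.linalg.qr(U_est)       # orthonormal basis of span(U_est)
    proj = Q @ (Q.T @ U_true)        # projection of U_true onto that span
    return np.linalg.norm(proj) ** 2 / np.linalg.norm(U_true) ** 2
```

Note that the value depends only on the column spans, not on the particular bases chosen, since the projection matrix is invariant to a change of orthonormal basis.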
Other Settings. Throughout the experiments, we set the ambient dimension to and the total number of samples to unless otherwise specified. We fix the tunable parameter and use default parameters for all baselines. Each experiment is repeated 10 times, and we report the averaged EV.
6.1 Robustness
We first study the robustness of OMRMD, measured by the EV value of its output after accessing the last sample, and compare it to the nuclear-norm based ORPCA and the batch algorithm PCP. For a detailed examination, we vary the intrinsic dimension from to with a step size of , and the corruption fraction from to with a step size of .
The general results are reported in Figure 1, where a brighter color indicates a higher EV (hence better performance). For easy tasks (i.e., low corruption and low rank), OMRMD and ORPCA perform comparably, while for more difficult cases OMRMD outperforms ORPCA. To further investigate this phenomenon, we plot the EV curve against the fraction of corruption for a given matrix rank, grouping the results into two parts: one with relatively low rank (Figure 2) and the other with a middle level of rank (Figure 3). Figure 2 indicates that, when manipulating a low-rank matrix, ORPCA works as well as OMRMD under a low level of noise. For instance, the EV produced by ORPCA is close to that of OMRMD for rank less than 40 and corruption no more than 0.26. However, when the rank becomes larger, ORPCA degrades quickly compared to OMRMD, possibly because the max-norm is a tighter approximation to the matrix rank. Since PCP is a batch formulation and accesses all the data in each iteration, it always achieves the best recovery performance.
6.2 Convergence Rate
We next study the convergence of OMRMD by plotting the EV curve against the number of samples. Besides ORPCA and PCP, we also add online PCA [AJL02] as a baseline. The results are illustrated in Figure 4. As expected, PCP achieves the best performance, since it is a batch method and accesses all the data throughout the algorithm. Online PCA degrades significantly even with low corruption (Figure 4(a)). OMRMD is comparable to ORPCA when the corruption is low (Figure 4(a)) and converges significantly faster when the corruption is high (Figures 4(c) and 4(d)). This observation agrees with Figure 1 and again suggests that under heavy corruption, the max-norm may be a better fit than the nuclear norm.
Indeed, OMRMD converges much faster even on large-scale problems. In Figure 5, we compare the convergence rates of OMRMD and ORPCA under different ambient dimensions. The intrinsic dimensions are set to , indicating a low-rank structure of the underlying data, and the error corruption is fixed at – a difficult setting for recovery. We observe that for high-dimensional cases ( and ), OMRMD significantly outperforms ORPCA. For example, in Figure 5(b), OMRMD achieves an EV value of after accessing only about 2000 samples, whereas ORPCA needs to access samples to obtain the same accuracy!
6.3 Computational Complexity
We note that OMRMD requires a bit more computation per iteration than ORPCA, as our algorithm may solve a dual problem to optimize (see Algorithm 3). Hence, our algorithm spends more time processing an instance whenever the initial solution violates the constraint. We plot the EV curve with respect to running time in Figure 6, which shows that ORPCA is roughly times faster than OMRMD per sample. However, what we mainly emphasize is the convergence rate: given a target EV value, how much time each algorithm needs to achieve it. In Figure 6(c), for example, OMRMD takes minutes to reach an EV value of 0.6, while ORPCA needs nearly minutes. From Figures 5 and 6, it is safe to say that OMRMD is superior to ORPCA in terms of convergence rate, at the price of slightly more computation per sample.
7 Conclusion
In this paper, we have developed an online algorithm for the max-norm regularized matrix decomposition problem. Using the matrix factorization form of the max-norm, we converted the original problem into a constrained one that facilitates an online implementation of the batch problem. We established theoretical guarantees that the solutions converge asymptotically to a stationary point of the expected loss function. Moreover, we empirically compared our algorithm with ORPCA, a recently proposed online algorithm for nuclear-norm based matrix decomposition. The simulation results suggest that the proposed algorithm is more robust than ORPCA, in particular for hard tasks (i.e., when a large fraction of the entries are corrupted). We also investigated the convergence rates of OMRMD and ORPCA, showing that OMRMD converges much faster even on large-scale problems. Given sufficiently many samples, our algorithm approaches the performance of the batch method PCP, a state-of-the-art formulation for subspace recovery. Our experiments suggest, to an extent, that the max-norm may be a tighter relaxation of the rank function than the nuclear norm.
Appendix A Proof Details
a.1 Proof for Stage I
First, we prove that all of the stochastic variables are uniformly bounded.
Proposition 16.
Let , and be the optimal solutions produced by Algorithm 1. Then,

The optimal solutions and are uniformly bounded.

The matrices and