Accurate Optimization of Weighted Nuclear Norm for Non-Rigid Structure from Motion
Abstract
Fitting a matrix of a given rank to data in a least squares sense can be done very effectively using 2nd order methods such as Levenberg-Marquardt by explicitly optimizing over a bilinear parameterization of the matrix. In contrast, when applying more general singular value penalties, such as weighted nuclear norm priors, direct optimization over the elements of the matrix is typically used. Due to nondifferentiability of the resulting objective function, first order subgradient or splitting methods are predominantly used. While these offer rapid iterations, it is well known that they become inefficient near the minimum due to zigzagging, and in practice one is therefore often forced to settle for an approximate solution.
In this paper we show that more accurate results can in many cases be achieved with 2nd order methods. Our main result shows how to construct bilinear formulations, for a general class of regularizers including weighted nuclear norm penalties, that are provably equivalent to the original problems. With these formulations the regularizing function becomes twice differentiable and 2nd order methods can be applied. We show experimentally, on a number of structure from motion problems, that our approach outperforms state-of-the-art methods.
1 Introduction
Matrix recovery problems of the form

(1) \min_X f(\sigma(X)) + \|\mathcal{A}(X) - b\|^2,

where \mathcal{A} is a linear operator and \sigma(X) is the vector of singular values of X, occur frequently in computer vision. Applications range from high level 3D reconstruction problems to low level pixel manipulations [36, 4, 40, 14, 2, 13, 38, 9, 28, 20, 15]. In structure from motion the most common approaches enforce a given low rank r_0 without additionally penalizing nonzero singular values [36, 4, 17]. (This can be viewed as a special case of (1) by letting f assign zero if no more than r_0 singular values are nonzero and infinity otherwise.)
Since the rank of a matrix X = BC^T is bounded by the number of columns/rows in the bilinear parameterization, the resulting optimization problem can be written

(2) \min_{B,C} \|\mathcal{A}(BC^T) - b\|^2.
This gives a smooth objective function which can therefore be optimized using 2nd order methods. In structure from motion problems, where the main interest is the extraction of camera matrices from B and 3D points from C, this is typically the preferred option [5]. In a series of recent papers Hong et al. showed that optimization with the VarPro algorithm is remarkably robust to local minima, converging to accurate solutions [17, 18, 19]. In [21] they further showed how uncalibrated rigid structure from motion with a proper perspective projection can be solved within a factorization framework. On the downside, the accuracy of these methods comes at a price. The iterations are typically costly since (even when the Schur complement trick is used) 2nd order methods require the inversion of a relatively large Hessian matrix, which may hinder application when suitable sparsity patterns are not present.
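As a small illustration of why the bilinear parameterization enforces the rank constraint automatically (a toy sketch, not the paper's code): any product of an m×r and an r×n matrix has rank at most r, so optimizing over the factors bounds the rank "for free".

```python
import numpy as np

# Illustration: the rank of X = B C^T is bounded by the number of
# columns r of the factors B and C.
rng = np.random.default_rng(0)
m, n, r = 8, 6, 3
B = rng.standard_normal((m, r))
C = rng.standard_normal((n, r))
X = B @ C.T  # an m x n matrix of rank at most r
```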
For low level vision problems such as denoising and inpainting, e.g. [28, 20, 15], the main interest is to recover the elements of X and not the factorization. In this context more general regularization terms that also consider the size of the singular values are often used. Since the singular values are nondifferentiable functions of the elements of X, first order methods are usually employed. The simplest option is perhaps a splitting method such as ADMM [3], since the proximal operator

(3) \text{prox}(Y) = \arg\min_X f(\sigma(X)) + \frac{\rho}{2}\|X - Y\|_F^2
can often be computed in closed form [20, 15, 28, 11, 24]. Alternatively, subgradient methods can be used to handle the nondifferentiability of the regularization term [9].
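For instance, for the (unweighted) nuclear norm the proximal operator has the well-known closed form of soft-thresholding the singular values. A minimal sketch (illustrative, not the implementation used in the cited works):

```python
import numpy as np

def prox_nuclear(Y, tau):
    """Proximal operator of tau * ||X||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((5, 4))
X = prox_nuclear(Y, 0.5)  # singular values of X are max(sigma_i(Y) - 0.5, 0)
```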
It is well known that while first order methods have rapid iterations and make large improvements during the first couple of iterations, they have a tendency to converge slowly when approaching the optimum. For example, [3] recommends using ADMM when a solution in the vicinity of the optimal point is acceptable, but suggests switching to a higher order method when high accuracy is desired. For low level vision problems, where success is not dependent on achieving an exact factorization of a particular size, first order methods may therefore be suitable. In contrast, in the context of structure from motion, having only roughly estimated elements of X causes the obtained factorization to be of a much larger size than necessary, yielding poor reconstructions with too many deformation modes.
In this paper we aim to extend the class of methods that can be optimized using bilinear parameterization, allowing accurate estimation of a low rank factorization under a general class of regularization terms. While our theory is applicable to many objectives, we focus on weighted nuclear norm penalties since these have been successfully used in structure from motion applications. We show that these can be optimized with 2nd order methods, which significantly increases the accuracy of the reconstruction. We further show that with these improvements the model of Hong et al. [21] can be extended to handle nonrigid reconstruction with a proper perspective model, as opposed to the orthographic projection model adopted by other factorization based approaches, e.g. [24, 11, 14, 40].
1.1 Related Work and Contributions
Minimization directly over X has been popular since the problem is convex when f is convex and absolutely symmetric, that is, f(\sigma) = f(|\sigma|) and f(\sigma) = f(\Pi\sigma), where \Pi is any permutation [26]. Convex penalties are however of limited interest since they generally prefer solutions with many small nonzero singular values to those with few large ones. A notable exception is the nuclear norm [12, 32, 31, 7, 8], which penalizes the sum of the singular values. Under the RIP assumption [32] exact or approximate low rank matrix recovery can then be guaranteed [32, 8]. On the other hand, since the nuclear norm penalizes large singular values, it suffers from a shrinking bias [6, 9, 25].
An alternative approach that unifies bilinear parameterization with regularization approaches is based on the observation [32] that the nuclear norm of a matrix can be expressed as \|X\|_* = \min_{BC^T = X} \frac{1}{2}(\|B\|_F^2 + \|C\|_F^2). Thus when f(\sigma(X)) = \mu\|X\|_*, where \mu is a scalar controlling the strength of the regularization, optimization of (1) can be formulated as

(4) \min_{B,C} \frac{\mu}{2}(\|B\|_F^2 + \|C\|_F^2) + \|\mathcal{A}(BC^T) - b\|^2.
Optimizing directly over the factors has the advantages that the number of variables is much smaller and the objective function is twice differentiable, so second order methods can be employed. While (4) is nonconvex because of the bilinear terms, the convexity of the nuclear norm can still be used to show that any local minimizer (B, C) with rank(BC^T) < r, where r is the number of columns in B and C, is globally optimal [1, 16]. The formulation (4) was introduced for vision problems in [6]. In practice it was observed that the shrinking bias of the nuclear norm makes it too weak to enforce a low rank when the data is noisy. Therefore, a “continuation” approach where the size of the factorization is gradually reduced was proposed. While this yields solutions with lower rank, the optimality guarantees no longer apply. Bach et al. [1] showed that
(5) 
where b_i, c_i are the i:th columns of B and C respectively, is convex for any choice of vector norms \|\cdot\|_b and \|\cdot\|_c. In [16] it was shown that a more general class of 2-homogeneous factor penalties result in a convex regularization similar to (5). The property that a local minimizer with rank(BC^T) < r is global also extends to this case. Still, because of convexity, it is clear that these formulations will suffer from a similar shrinking bias as the nuclear norm.
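The variational form of the nuclear norm underlying (4) is easy to verify numerically: the balanced factors B = UΣ^{1/2}, C = VΣ^{1/2} attain the nuclear norm, while any other factorization of the same matrix only increases the factor penalty. A small sketch (illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
nuc = s.sum()                      # nuclear norm of X

# Balanced factors attain the minimum of (||B||_F^2 + ||C||_F^2) / 2:
B = U * np.sqrt(s)
C = Vt.T * np.sqrt(s)
bound = 0.5 * (np.linalg.norm(B)**2 + np.linalg.norm(C)**2)  # equals nuc

# Any other factorization X = (B G)(C G^{-T})^T gives a value >= nuc:
G = rng.standard_normal((5, 5)) + 3 * np.eye(5)  # well-conditioned, invertible
B2, C2 = B @ G, C @ np.linalg.inv(G).T
```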
One way of reducing shrinking bias is to use penalties that are constant for large singular values. Shang et al. [34] showed that penalization with the Schatten seminorms S_q, for q = 1/2 and q = 2/3, can be achieved using a convex penalty on the factors B and C. A generalization to general values of q is given in [39]. An algorithm that addresses a general class of penalties for symmetric matrices is presented in [23]. In [30] it was shown that if f is separable with the same penalty for each singular value, that is, f(\sigma(X)) = \sum_i g(\sigma_i(X)), where g is differentiable, concave and nondecreasing, then (1) can be optimized using second order methods such as Levenberg-Marquardt or VarPro. This is achieved by reparameterizing the matrix using a bilinear factorization and optimizing
(6) 
In contrast to the singular values, the resulting function is smooth, which allows optimization with second order methods. It is shown in [30] that if X is optimal in (1) then the balanced factorization B = U\Sigma^{1/2}, C = V\Sigma^{1/2}, where U\Sigma V^T is the SVD of X, is optimal in (6). (Here we assume that B is m \times r, C is n \times r and X is m \times n, with r \le \min(m, n).) Note also that this choice gives \|b_i\|^2 = \|c_i\|^2 = \sigma_i(X).
Having a separable objective with the same penalty for each singular value is however somewhat restrictive. An alternative way of reducing bias is to reweight the nuclear norm and use f(\sigma(X)) = \sum_i a_i \sigma_i(X) [20, 15, 24]. Assigning low weights to the first (largest) singular values allows accurate matrix recovery. In addition, the weights can be used to regularize the size of the nonzero singular values, which has been shown to be an additional useful prior in NRSfM [24]. Note however that the singular values are always ordered in nonincreasing order. Therefore, while the function is linear in the singular values, it is in fact nonconvex and nondifferentiable in the elements of X whenever the singular values are not distinct (which is typically the case in low rank recovery).
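The nonconvexity is easy to exhibit numerically. A toy example (ours, for illustration): with weights a = (0, 1) the penalty vanishes on two diagonal matrices but not on their average, so the function cannot be convex in the elements of X.

```python
import numpy as np

def wnn(X, a):
    """Weighted nuclear norm sum_i a_i * sigma_i(X), sigma sorted descending."""
    return np.dot(a, np.linalg.svd(X, compute_uv=False))

a = np.array([0.0, 1.0])                 # first singular value is free
X1, X2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
mid = wnn((X1 + X2) / 2, a)              # penalty at the midpoint
avg = (wnn(X1, a) + wnn(X2, a)) / 2      # average of endpoint penalties
# mid > avg, so the penalty is nonconvex in the elements of X
```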
In this paper we show that penalties of this type allow optimization over the factors B, C instead of X. In particular we study the optimization problem
(7)  
s.t.  (8) 
and its constraint set for a fixed X. We characterize the extreme points of the feasible set using permutation matrices and give conditions on the objective that ensure that the optimal solution is attained at a permutation \Pi. For the weighted nuclear norm we show that if the elements of a are nondecreasing, the optimal solution is given by the identity permutation. A simple consequence of this result is that
(9) \min_{B,C} \sum_i \frac{a_i}{2}(\|b_i\|^2 + \|c_i\|^2) + \|\mathcal{A}(BC^T) - b\|^2

is equivalent to

(10) \min_X \sum_i a_i \sigma_i(X) + \|\mathcal{A}(X) - b\|^2.
While the latter is nondifferentiable the former is smooth and can be minimized efficiently with second order methods.
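The claimed equivalence can be sanity-checked numerically: with nondecreasing weights, the balanced SVD factorization attains the weighted nuclear norm, and other factorizations of the same matrix give a larger bilinear penalty. A sketch (illustrative, not the paper's code):

```python
import numpy as np

def bilinear_pen(B, C, a):
    # sum_i a_i * (||b_i||^2 + ||c_i||^2) / 2 over columns b_i, c_i
    return 0.5 * np.dot(a, (B**2).sum(0) + (C**2).sum(0))

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
a = np.array([0.0, 0.1, 0.5, 1.0, 2.0])   # nondecreasing weights
wnn_val = np.dot(a, s)                     # weighted nuclear norm of X

# Balanced SVD factorization attains the weighted nuclear norm ...
B, C = U * np.sqrt(s), Vt.T * np.sqrt(s)
attained = bilinear_pen(B, C, a)
# ... while other factorizations of X give at least that value.
vals = []
for _ in range(20):
    G = rng.standard_normal((5, 5)) + 3 * np.eye(5)
    vals.append(bilinear_pen(B @ G, C @ np.linalg.inv(G).T, a))
```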
Our experimental evaluation confirms that, as can be expected, this approach outperforms current first order methods in terms of accuracy. On the other hand, first order methods make large improvements during the first couple of iterations, and therefore we combine the two approaches. We start out with a simple ADMM implementation and switch to our second order approach when only minor progress is being made. Note however that since the original formulation is nonconvex, local minima can exist. In addition, bilinear parameterization introduces additional stationary points that are not present in the original parameterization. One such example is B = 0, C = 0, where all gradients vanish. Still, our experiments show that the combination of these methods often converges to a good solution from random initialization.
2 Bilinear Parameterization Penalties
In this section we will derive the dependence between the singular values of X and the factors B, C when X = BC^T. For ease of notation we will suppress the dependence on X since this will be clear from the context. Let X have the SVD X = U\Sigma V^T, and consider other potential factorizations of the form B = U\Sigma^{1/2}G, C = V\Sigma^{1/2}G^{-T}. In this section we will further assume that G is a square invertible matrix. (We will generalize the results to the rectangular case in Section 3.)
We begin by noting that , where and are columns of and respectively. We have
(11) 
and similarly and therefore . This gives
(12) 
or in matrix form
(13) 
Minimizing (7) over different factorizations is therefore equivalent to solving
(14)  
s.t.  (15) 
where
(16) 
It is clear that , where is any permutation, is feasible in the above problem since permutations are orthogonal. In addition they contain only zeros and ones and therefore it is easy to see that this choice gives . In the next section we will show that these are extreme points of the feasible set, in the sense that they can not be written as convex combinations of other points in the set. Extreme points are important for optimization since the global minimum is guaranteed to be attained (if it exists) in such a point if the objective function has concavity properties. This is for example true if is quasiconcave, that is, the superlevel sets are convex. To see this let , and consider the superlevel set where . Since both and it is clear by convexity that so is and therefore .
2.1 Extreme Points and Optimality
We now consider the optimization problem (14)(15) and a convex relaxation of the constraint set. For this purpose we define the convex set
(17) 
Since we have , where and are the rows of and , and therefore
(18) 
By (18) we thus have that the column sums of M are no less than one. (An identical argument shows that the same holds for the row sums.) In addition it is clear that all the elements of M are nonnegative, which shows that M belongs to the relaxed set (17). Therefore the problem
(19)  
s.t.  (20) 
is clearly a relaxation of (14)-(15). We will next show that the two problems have the same minimum, if a minimizer to (19)-(20) exists, when the objective function is quasiconcave. As we have seen previously, the minimum is then attained in an extreme point. We therefore need the following characterization.
Lemma 2.1.
The extreme points of are the permutation matrices.
Proof.
The two constraint sets and can be written in matrix form as , where is a vector containing the elements of , is a vector of all ones and the matrix is totally unimodular. Therefore the extreme points of
(21) 
are integer valued. Now suppose that is integer valued with an element that is not 0/1. Then we define the two matrices and such that for , and . Then clearly and which shows that is not an extreme point. We conclude the proof by noting that the only matrices that fulfill the constraints are permutations. ∎
Since permutation matrices are orthogonal with elements in {0, 1}, it is clear that they are feasible in (14)-(15). Therefore the extreme points of the relaxed set are also feasible for the original problem. Hence, if the minimum of (19)-(20) is attained, there is an optimal extreme point which also solves (14)-(15), and therefore the solution is given by a permutation \Pi.
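Lemma 2.1 and the relaxation argument can be checked numerically on a small instance: minimizing a linear function with positive coefficients over the relaxed set {M >= 0, row and column sums >= 1} lands on a permutation matrix, and the optimum matches the best assignment found by brute force. A sketch using scipy (illustrative; the cost matrix is an arbitrary example of ours):

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

n = 3
cost = np.array([[4., 1., 3.], [2., 6., 5.], [3., 2., 7.]])

# Inequality constraints: -row sums <= -1 and -column sums <= -1.
A_ub, b_ub = [], []
for i in range(n):
    row = np.zeros((n, n)); row[i, :] = -1; A_ub.append(row.ravel()); b_ub.append(-1.)
for j in range(n):
    col = np.zeros((n, n)); col[:, j] = -1; A_ub.append(col.ravel()); b_ub.append(-1.)

res = linprog(cost.ravel(), A_ub=np.array(A_ub), b_ub=b_ub, bounds=(0, None))
M = res.x.reshape(n, n)  # an optimal extreme point: a permutation matrix

# Brute-force minimum over all permutations for comparison.
best_perm = min(sum(cost[i, p[i]] for i in range(n)) for p in permutations(range(n)))
```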
We conclude this section by giving a sufficient condition for the minimum of (19)-(20) to exist, namely that the objective is nondecreasing in all of its variables, that is, if x_i \le y_i for all i then f(x) \le f(y). To see that a minimizer exists, we note that if a feasible point has an element strictly larger than necessary, then this element can be reduced without violating any of the feasibility constraints (21), and doing so does not increase the objective value. Hence, we may restrict our search to a compact subset of (21) where the elements are bounded.
We can now summarize the results of this section in the following theorem:
3 Nonsquare Matrices
In the previous section we made the assumption that the matrices were square, which corresponds to searching over factors B and C consisting of r columns. In addition, since the matrices are invertible, this means that B and C have linearly independent columns. In this section we generalize the result from Section 2.1 to rectangular matrices. We therefore suppose that the matrices are non-square, with more rows than columns, and consider the slightly modified problem
(24)  
Note that do not commute and we therefore only assume that is a left inverse of . In what follows we show that by adding zeros to the vector we can extend , into square matrices without changing the objective function.
Note that we may assume that has full row rank since otherwise . Let be the Moore-Penrose pseudo-inverse and a matrix containing a basis for the space orthogonal to the row space of (and the column space of ). Since the matrix is of the form , where is a coefficient matrix. We now want to find matrices and such that
(25) 
To do this we first select since . Then we let , where is a size coefficient matrix, since this gives . To determine we consider . Selecting thus gives square matrices such that
(26) 
Further letting shows that and and the results of the previous section give that the minimizer of is a permutation of the elements in the vector .
We therefore have the following result:
4 Linear Objectives: Weighted Nuclear Norms
We now consider weighted nuclear norm regularization f(\sigma(X)) = \sum_i a_i \sigma_i(X). To ensure that the problem is well posed we assume that the elements of a are nonnegative. Since linearity implies concavity, the results of Sections 2.1 and 3 now show that the minimum over all factorizations is attained at a permutation \Pi. To ensure that the bilinear formulation is equivalent to the original one, we need to show that the optimum pairs the weights and singular values in the right order. Suppose that the elements of a are sorted in ascending order a_1 \le a_2 \le \ldots. It is easy to see that for \Pi to give the smallest objective value it should pair the smallest weights with the largest singular values, which yields the value \sum_i a_i \sigma_i(X). We therefore conclude that minimizing (6) with a linear objective corresponds to regularization with a weighted nuclear norm with nondecreasing weights.
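The sorting argument is the classical rearrangement inequality: with nondecreasing weights and nonincreasing values, the identity pairing is optimal. A brute-force check (illustrative):

```python
import numpy as np
from itertools import permutations

a = np.array([0.0, 0.5, 1.0, 2.0])   # nondecreasing weights
z = np.array([5.0, 3.0, 2.0, 1.0])   # nonincreasing values (like singular values)

identity_val = float(np.dot(a, z))   # pair a_i with z_i
all_vals = [sum(a[i] * z[p[i]] for i in range(len(a)))
            for p in permutations(range(len(a)))]
# min over all pairings is attained by the identity permutation
```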
5 Experiments
In this section we start by describing implementation details of our method and then apply it to the problems of low-rank matrix recovery and non-rigid structure recovery. Solving the weighted nuclear norm regularized problem (9) now amounts to minimizing
(27) \min_{B,C} \|\mathcal{A}(BC^T) - b\|^2 + \sum_i \frac{a_i}{2}(\|b_i\|^2 + \|c_i\|^2).
Note that the terms in the objective (27) can be combined into a single norm term by vertically concatenating the data residual and the columns b_i and c_i weighted by \sqrt{a_i/2}. We denote the resulting vector by r_a, giving the objective \|r_a\|^2, where the subscript reflects the dependence on the weights a. Since the objective is smooth, standard methods such as Levenberg-Marquardt can be applied, and Algorithm 1 shows an overview of the method used. Additional information about the algorithm is provided in the supplementary material.
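Algorithm 1 itself is not reproduced here; the following is a minimal Levenberg-Marquardt sketch of the idea on the simplest instance where the operator is the identity (so the data term is ||BC^T - M||_F^2), using a finite-difference Jacobian for brevity. All names are ours; the actual implementation uses analytic Jacobians and the concatenated-residual form described above.

```python
import numpy as np

def residual(x, M, a, m, n, r):
    """Stacked residual: data term plus regularization columns sqrt(a_i/2)*b_i, c_i."""
    B, C = x[:m*r].reshape(m, r), x[m*r:].reshape(n, r)
    w = np.sqrt(a / 2.0)
    return np.concatenate([(B @ C.T - M).ravel(), (B * w).ravel(), (C * w).ravel()])

def lm(x, M, a, m, n, r, iters=100, mu=1e-2):
    for _ in range(iters):
        rv = residual(x, M, a, m, n, r)
        J = np.empty((rv.size, x.size))
        for k in range(x.size):                    # finite-difference Jacobian
            e = np.zeros_like(x); e[k] = 1e-6
            J[:, k] = (residual(x + e, M, a, m, n, r) - rv) / 1e-6
        step = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ rv)
        if np.sum(residual(x + step, M, a, m, n, r)**2) < np.sum(rv**2):
            x, mu = x + step, mu * 0.5             # accept, trust the model more
        else:
            mu *= 5.0                              # reject, damp harder
    return x

rng = np.random.default_rng(4)
m, n, r = 5, 4, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-2 target
a = np.array([0.01, 0.01])                                     # mild weights
x0 = rng.standard_normal((m + n) * r)
x = lm(x0, M, a, m, n, r)
B, C = x[:m*r].reshape(m, r), x[m*r:].reshape(n, r)
```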
The remainder of this section is organized as follows. The particular form of the data fitting term in (27) when applied to structure from motion is described in Section 5.1. In Section 5.2 we compare the convergence of first- and second-order methods and, motivated by ADMM's fast iterations but low accuracy, as opposed to the bilinear parameterization's high accuracy but slower iterations, we combine the two methods by initializing the bilinear parameterization with ADMM's solution [3, 24] for a non-rigid structure recovery problem.
5.1 Pseudo Object Space Error (pOSE) and Non-Rigid Structure from Motion
To compare the performance of first- and second-order methods, we choose as objective function the Pseudo Object Space Error (pOSE) [21], which is a combination of the object space error \ell_{OSE} and the affine projection error \ell_{affine}. Here \tilde{P}_i and p_i^T denote, respectively, the first two rows and the third row of the camera matrix P_i; X_j is a 3D point in homogeneous coordinates; m_{ij} is the 2D observation of the j:th point in the i:th camera; and \Omega represents the set of observable data. The Pseudo Object Space Error is then given by
(28) \ell_{pOSE}(P, X) = (1 - \eta)\,\ell_{OSE}(P, X) + \eta\,\ell_{affine}(P, X), with \ell_{OSE} = \sum_{(i,j)\in\Omega} \|\tilde{P}_i X_j - (p_i^T X_j)\, m_{ij}\|^2 and \ell_{affine} = \sum_{(i,j)\in\Omega} \|\tilde{P}_i X_j - m_{ij}\|^2,
where \eta balances the weight between the two errors. One of the main properties of pOSE is its wide basin of convergence while keeping a bilinear problem structure. The pOSE objective, originally designed for rigid structure from motion, can be extended to the nonrigid case by replacing the 3D structure by a linear combination of shape bases, with the camera and structure matrices structured as
(29) 
We denote by and the vertical and horizontal concatenations of and , respectively. Note that for , we have , which corresponds to the rigid case, and consequently . For , by construction we have .
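A direct transcription of the pOSE objective as described above (a sketch under our notational assumptions; the function and variable names are ours):

```python
import numpy as np

def pose_loss(P, X, m, obs, eta):
    """Pseudo Object Space Error: (1 - eta) * OSE + eta * affine error.
    P: list of 3x4 cameras, X: 4xN homogeneous points, m: dict (i, j) -> 2D obs."""
    loss = 0.0
    for (i, j) in obs:
        Pi, xj = P[i], X[:, j]
        proj12, depth = Pi[:2] @ xj, Pi[2] @ xj
        ose = np.sum((proj12 - depth * m[i, j])**2)   # object space error
        aff = np.sum((proj12 - m[i, j])**2)           # affine projection error
        loss += (1 - eta) * ose + eta * aff
    return loss

# Exact perspective projection makes the OSE term vanish:
P = [np.hstack([np.eye(3), np.zeros((3, 1))])]   # camera [I | 0]
X = np.array([[1.0], [2.0], [4.0], [1.0]])       # one point at depth 4
m = {(0, 0): np.array([0.25, 0.5])}              # its exact perspective image
obs = [(0, 0)]
```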
5.2 Low-Rank Matrix Recovery with pOSE Errors
In this section we compare the performance of first- and second-order methods, in terms of convergence and accuracy, starting from the same initial guess, for low-rank matrix recovery with pOSE. In this problem, we aim to minimize
(30) \min_X \sum_i a_i \sigma_i(X) + \ell_{pOSE}(X).
We apply our method and solve the optimization problem defined in (27) using the bilinear factorization X = BC^T. We test the performance of our method on four datasets: Door [29], Back [33], Heart [35], Paper [37]. The first consists of image measurements of a rigid structure with missing data, while the remaining three datasets track points in deformable structures.
For the Door dataset, we apply two different selections of weights on the singular values of X, corresponding to the nuclear norm and the truncated nuclear norm. For the Back, Heart and Paper datasets, we apply the nuclear norm and a weighted nuclear norm in which the first four singular values of X are not penalized and the remaining ones are increasingly penalized. The values of the weights are chosen such that there is a low rank solution to (30) for the rigid and nonrigid datasets, respectively.
We compare the bilinear parameterization with three first-order methods commonly used for low-rank matrix recovery: the Alternating Direction Method of Multipliers (ADMM) [3], Iteratively Reweighted Nuclear Norm (IRNN) [10], and Accelerated Gradient Descent (AGD) [27]. We also test the methods for two different settings of \eta, corresponding to the near-perspective and near-affine camera models, respectively. To improve numerical stability of the algorithms, as a preprocessing step we normalize the image measurement matrix by its norm. The methods are initialized with the closed-form solution of the regularization-free problem. The comparison of the four algorithms in terms of total log-loss over time is shown in Figure 1. The log-loss is used for better visualization. The plots for IRNN for the nuclear norm are omitted since it demonstrated slow convergence compared to the remaining three methods. A qualitative evaluation of the results on one of the images of the Door dataset, for the truncated nuclear norm and the near-perspective camera model, is shown in Figure 2. The qualitative results for the remaining datasets are provided in the supplementary material.
In general, we observe that first-order methods demonstrate faster initial convergence, mostly due to faster iterations. However, when near a minimum, the convergence rate drops significantly and the methods tend to stall. In contrast, the bilinear parameterization compensates for its slower iterations with higher accuracy, reaching solutions with lower energy. This is especially visible for the near-perspective camera model, which reinforces the advantages of using a second-order method on image data under perspective projection. To compensate for the slower convergence, we propose initializing the bilinear parameterization with the solution obtained with ADMM. In this way, the bilinear parameterization starts near the minimum and performs local refinement to further improve accuracy.
5.3 Non-Rigid Structure Recovery
Consider now that the camera rotations in (29) are known (or previously estimated). In this case the nonrigid structure and the shape coefficients are the unknowns. It is directly observed that the third homogeneous coordinate is equal to 1 by construction and independent of the unknowns. As a consequence, the rank regularization can be applied to the structure matrix. A similar problem was studied in [11], but for orthogonal camera models, where the authors propose the rank regularization to be applied to a reshaped version of the structure, a matrix X^# structured as
(31) 
where the rows collect, respectively, the x-, y- and z-coordinates of the 3D points at each frame. The reshaping function g performs a permutation of the elements of X to obtain X^#. With this reshaping the rank of X^# is bounded by the number of shape bases, meaning that we can factorize it as X^# = BC^T. The optimization problem then becomes
(32) 
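The reshaping can be sketched as follows (our layout conventions; the exact ordering in [11] may differ). It also lets us check the key rank property: a structure generated from K shape bases gives rank(X^#) <= K even when the stacked structure X itself has rank 3K.

```python
import numpy as np

def reshape_sharp(X, F, N):
    """X is 3F x N (rows x_f; y_f; z_f stacked per frame f).
    Returns the F x 3N matrix [x-coords | y-coords | z-coords]."""
    Xs = X.reshape(F, 3, N)  # frame f -> its 3 x N coordinate block
    return np.concatenate([Xs[:, 0, :], Xs[:, 1, :], Xs[:, 2, :]], axis=1)

# Structure generated from K shape bases with per-frame coefficients.
F, N, K = 6, 10, 2
rng = np.random.default_rng(5)
basis = rng.standard_normal((K, 3, N))
coeff = rng.standard_normal((F, K))
S = np.einsum('fk,kdn->fdn', coeff, basis)   # per-frame 3 x N shapes
X = S.reshape(3 * F, N)                      # stacked structure, rank up to 3K
Xs = reshape_sharp(X, F, N)                  # reshaped structure, rank <= K
```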
Solving this optimization problem requires small adjustments to the proposed Algorithm 1, which can be consulted in the supplementary material. We apply our methods to the 5 datasets (Articulated, Balloon, Paper, Stretch, Tearing) from the NRSfM Challenge [22]. Each of these datasets includes tracks of image points for orthogonal and perspective camera models for six different camera paths (Circle, Flyby, Line, Semicircle, Tricky, Zigzag), as well as the ground-truth 3D structure for one of the frames. We use the 2D observations for the orthogonal camera model to compute the rotation matrices, as done in [11], and the ground-truth 3D structure to estimate the intrinsic camera matrix, which is assumed to be fixed during each sequence. The intrinsic camera matrix is used to obtain the calibrated 2D observations for the perspective camera model data. For the nuclear norm (NN), all weights are set equal. For the weighted nuclear norm (WNN), the weights are selected similarly to [24]:
(33) 
where \epsilon is a small number for numerical stability, and the singular values are taken from the closed-form solution of the regularization-free objective.
As baseline we use the best performing first-order method according to the experiments in Section 5.2, ADMM, and apply the method described in Algorithm 1 for local refinement starting from ADMM's solution. We also try our method for the orthogonal camera model, and compare it with BMM [11] and RBMM [24], which correspond to ADMM implementations for the nuclear norm and weighted nuclear norm, respectively. These methods perform a best rank approximation of the obtained ADMM solution if it is not of the desired rank after convergence. We let the ADMM-based methods run until convergence or stalling for a fair comparison. The average log-losses, before and after refinement, obtained on each dataset over the 6 camera paths are shown in Table 1. The average reconstruction errors, in millimeters, on each dataset over the 6 camera paths relative to the provided ground-truth structure are shown in Table 2. In Figure 3 we also show qualitative results of the obtained 3D reconstructions of the objects in the 5 datasets. More qualitative results are provided in the supplementary material.
Table 1: Average log-losses on each dataset over the 6 camera paths.

Method \ Dataset           Articulated  Balloon  Paper  Stretch  Tearing
Orthogonal   BMM [11]          1.645     2.267   1.712   2.282    1.453
             Ours-NN           1.800     2.352   2.188   2.509    1.634
             RBMM [24]         1.648     1.979   1.855   1.997    1.522
             Ours-WNN          1.648     1.979   1.855   1.997    1.522
Perspective  ADMM-NN           2.221     2.529   2.338   2.395    1.471
             Ours-NN           2.415     2.657   2.560   2.622    2.053
             ADMM-WNN          2.455     2.617   2.195   2.651    1.688
             Ours-WNN          2.486     2.931   2.777   2.857    2.103
Table 2: Average reconstruction errors (mm) on each dataset over the 6 camera paths.

Method \ Dataset           Articulated  Balloon  Paper  Stretch  Tearing
Orthogonal   BMM [11]         18.49     10.39    8.94   10.02    14.23
             Ours-NN          18.31      8.53   10.94   10.67    17.03
             RBMM [24]        16.00      7.84   10.69    7.53    16.34
             Ours-WNN         15.03      8.05   10.45    9.01    16.20
Perspective  ADMM-NN          16.70      8.05    7.96    6.04     9.40
             Ours-NN          16.13      6.48    6.80    6.00     9.31
             ADMM-WNN         18.33      8.95   10.14    8.06     9.28
             Ours-WNN         16.53      6.27    5.68    5.93     8.42
The results show that our method achieves lower energies than the ADMM baselines on all datasets. As expected from the experiments in Section 5.2, the difference is more substantial for the perspective model. Furthermore, even though we are not explicitly minimizing the reconstruction error reported in Table 2, we consistently obtain the lowest reconstruction error for all datasets, sometimes with large improvements compared to ADMM (see Balloon and Stretch in Figure 3). The same does not apply to the orthogonal data, where achieving lower energies did not lead to lower reconstruction errors.
6 Conclusions
In this paper we showed that it is possible to optimize a general class of singular value penalties using a bilinear parameterization of the matrix. We showed that with this parameterization, weighted nuclear norm penalties turn into smooth objectives that can be accurately solved with 2nd order methods. Our proposed approach starts with ADMM, which rapidly decreases the objective during the first couple of iterations, and switches to Levenberg-Marquardt when the ADMM iterations make little progress. This results in a much more accurate solution, and we showed that we were able to extend the recently proposed pOSE [21] to handle nonrigid reconstruction problems.
While second order methods offer increased accuracy, our current approach is expensive since each iteration requires the inversion of a large matrix. Exploring feasible alternatives such as preconditioning and conjugate gradient approaches is an interesting future direction.
Something that we have not discussed is adding constraints on the factors, which is possible since these are explicitly present in the optimization. This is very relevant for structure from motion problems and will likely be a fruitful direction to explore.
References
 Francis R. Bach. Convex relaxations of structured matrix factorizations. CoRR, abs/1309.3117, 2013.
 Ronen Basri, David Jacobs, and Ira Kemelmacher. Photometric stereo with general, unknown lighting. International Journal of Computer Vision, 72(3):239–257, May 2007.
 Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1–122, 2011.
 C. Bregler, A. Hertzmann, and H. Biermann. Recovering nonrigid 3d shape from image streams. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2000.
 A. M. Buchanan and A. W. Fitzgibbon. Damped newton algorithms for matrix factorization with missing data. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005.
 R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition. In International Conference on Computer Vision (ICCV), 2013.
 Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? J. ACM, 58(3):11:1–11:37, 2011.
 Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
 Canyi Lu, Jinhui Tang, Shuicheng Yan, and Zhouchen Lin. Generalized nonconvex nonsmooth low-rank minimization. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
 Canyi Lu, Jinhui Tang, Shuicheng Yan, and Zhouchen Lin. Nonconvex nonsmooth low-rank minimization via iteratively reweighted nuclear norm. IEEE Transactions on Image Processing, 25, 2015.
 Yuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure-from-motion factorization. International Journal of Computer Vision, 107(2):101–122, 2014.
 Maryam Fazel, Haitham Hindi, and Stephen P Boyd. A rank minimization heuristic with application to minimum order system approximation. In American Control Conference, 2001.
 Ravi Garg, Anastasios Roussos, and Lourdes Agapito. A variational approach to video registration with subspace constraints. International Journal of Computer Vision, 104(3):286–314, 2013.
 Ravi Garg, Anastasios Roussos, and Lourdes de Agapito. Dense variational reconstruction of nonrigid surfaces from monocular video. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
 Shuhang Gu, Qi Xie, Deyu Meng, Wangmeng Zuo, Xiangchu Feng, and Lei Zhang. Weighted nuclear norm minimization and its applications to low level vision. International Journal of Computer Vision, 121, 07 2016.
 Benjamin D. Haeffele and René Vidal. Structured low-rank matrix factorization: Global optimality, algorithms, and applications. CoRR, abs/1708.07850, 2017.
 Je Hyeong Hong and Andrew Fitzgibbon. Secrets of matrix factorization: Approximations, numerics, manifold optimization and random restarts. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
 Je Hyeong Hong, Christopher Zach, Andrew Fitzgibbon, and Roberto Cipolla. Projective bundle adjustment from arbitrary initialization using the variable projection method. In European Conference on Computer Vision (ECCV), 2016.
 Je Hyeong Hong, Christopher Zach, Andrew Fitzgibbon, and Roberto Cipolla. Projective bundle adjustment from arbitrary initialization using the variable projection method. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 Y. Hu, D. Zhang, J. Ye, X. Li, and X. He. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9):2117–2130, 2013.
 Je Hyeong Hong and Christopher Zach. pOSE: Pseudo object space error for initialization-free bundle adjustment. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 Sebastian Hoppe Nesgaard Jensen, Alessio Del Bue, Mads Emil Brix Doest, and Henrik Aanæs. A benchmark and evaluation of non-rigid structure from motion, 2018.
 Mikhail Krechetov, Jakub Marecek, Yury Maximov, and Martin Takac. Entropy-penalized semidefinite programming. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), pages 1123–1129. International Joint Conferences on Artificial Intelligence Organization, 2019.
 Suryansh Kumar. A simple prior-free method for non-rigid structure-from-motion factorization: Revisited. CoRR, abs/1902.10274, 2019.
 Viktor Larsson and Carl Olsson. Convex low rank approximation. International Journal of Computer Vision, 120(2):194–214, 2016.
 Adrian S Lewis. The convex analysis of unitarily invariant matrix functions. Journal of Convex Analysis, 2(1):173–183, 1995.
 Huan Li and Zhouchen Lin. Provable accelerated gradient method for nonconvex low rank optimization. Machine Learning, 2019.
 T. H. Oh, Y. W. Tai, J. C. Bazin, H. Kim, and I. S. Kweon. Partial sum minimization of singular values in robust PCA: Algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4):744–758, 2016.
 Carl Olsson and Olof Enqvist. Stable structure from motion for unordered image collections. In Proceedings of the Scandinavian Conference on Image Analysis (SCIA), pages 524–535, 2011.
 Marcus Valtonen Örnhag, Carl Olsson, and Anders Heyden. Bilinear parameterization for differentiable rank-regularization. CoRR, abs/1811.11088, 2018.
 Samet Oymak, Karthik Mohan, Maryam Fazel, and Babak Hassibi. A simplified approach to recovery conditions for low rank matrices. In IEEE International Symposium on Information Theory Proceedings (ISIT), pages 2318–2322, 2011.
 Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
 Chris Russell, Joao Fayad, and Lourdes Agapito. Energy based multiple model fitting for non-rigid structure from motion. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3009–3016, 2011.
 F. Shang, J. Cheng, Y. Liu, Z. Luo, and Z. Lin. Bilinear factor matrix norm minimization for robust PCA: Algorithms and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(9):2066–2080, 2018.
 Danail Stoyanov, George P. Mylonas, Fani Deligianni, Ara Darzi, and Guang-Zhong Yang. Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures. In James S. Duncan and Guido Gerig, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2005, pages 139–146, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg.
 Carlo Tomasi and Takeo Kanade. Shape and motion from image streams under orthography: A factorization method. International Journal of Computer Vision, 9(2):137–154, 1992.
 Aydin Varol, Mathieu Salzmann, Engin Tola, and Pascal Fua. Template-free monocular reconstruction of deformable surfaces. In International Conference on Computer Vision (ICCV), pages 1811–1818, 2009.
 Naiyan Wang, Tiansheng Yao, Jingdong Wang, and Dit-Yan Yeung. A probabilistic approach to robust matrix factorization. In European Conference on Computer Vision (ECCV), 2012.
 Chen Xu, Zhouchen Lin, and Hongbin Zha. A unified convex surrogate for the Schatten-p norm. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2017.
 Jingyu Yan and M. Pollefeys. A factorization-based approach for articulated non-rigid shape, motion and kinematic chain recovery from video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(5):865–877, 2008.