Optimization Methods for Sparse Pseudo-Likelihood Graphical Model Selection
Sparse high-dimensional graphical model selection is a popular topic in contemporary machine learning. To this end, various useful approaches have been proposed in the context of ℓ₁-penalized estimation in the Gaussian framework. Though many of these inverse covariance estimation approaches are demonstrably scalable and have leveraged recent advances in convex optimization, they still depend on the Gaussian functional form. To address this gap, a convex pseudo-likelihood based partial correlation graph estimation method (CONCORD) has recently been proposed. This method uses coordinate-wise minimization of a regression-based pseudo-likelihood, and has been shown to have robust model selection properties in comparison with the Gaussian approach. In direct contrast to the parallel work in the Gaussian setting, however, this new convex pseudo-likelihood framework has not leveraged the extensive array of methods that have been proposed in the machine learning literature for convex optimization. In this paper, we address this crucial gap by proposing two proximal gradient methods (CONCORD-ISTA and CONCORD-FISTA) for performing ℓ₁-regularized inverse covariance matrix estimation in the pseudo-likelihood framework. We present timing comparisons with coordinate-wise minimization and demonstrate that our approach yields tremendous payoffs for ℓ₁-penalized partial correlation graph estimation outside the Gaussian setting, thus yielding the fastest and most scalable approach for such problems. We undertake a theoretical analysis of our approach and rigorously demonstrate convergence, and also derive rates thereof.
Sang-Yun Oh (Computational Research Division, Lawrence Berkeley National Lab), Onkar Dalal (Stanford University), Kshitij Khare (Department of Statistics, University of Florida), Bala Rajaratnam (Department of Statistics, Stanford University)
1 Introduction
Sparse inverse covariance estimation has received tremendous attention in the machine learning, statistics and optimization communities. These sparse models, popularly known as graphical models, have widespread use in various applications, especially in high-dimensional settings. The most popular inverse covariance estimation framework is arguably the ℓ₁-penalized Gaussian likelihood optimization framework, given by

  minimize_{Ω ∈ 𝕊₊ᵖ}  − log det Ω + tr(SΩ) + λ ‖Ω‖₁,
where 𝕊₊ᵖ denotes the space of p-dimensional positive definite matrices, and an ℓ₁-penalty is imposed on the elements of Ω by the term ‖Ω‖₁ along with the scaling factor λ. The matrix S denotes the sample covariance matrix of the data. As the ℓ₁-penalized log-likelihood is convex, the problem becomes tractable and has benefited from advances in convex optimization. Recent efforts in the literature on Gaussian graphical models have therefore focused on developing principled methods that are increasingly scalable. The literature on this topic is enormous; for the sake of brevity, space constraints and the topic of this paper, we avoid an extensive literature review and instead refer to the references in the seminal work of Banerjee et al. (2008) and the very recent work of Dalal and Rajaratnam (2014). These two papers contain references to recent work, including past NIPS conference proceedings.
1.2 The CONCORD method
Despite their tremendous contributions, one shortcoming of the traditional approaches to ℓ₁-penalized likelihood maximization is the restriction to the Gaussian assumption. To address this gap, a number of ℓ₁-penalized pseudo-likelihood approaches have been proposed: SPACE (Peng et al., 2009), SPLICE (Rocha et al., 2008), and SYMLASSO (Friedman et al., 2010). These approaches are either not convex, and/or convergence of the corresponding optimization algorithms is not established. In this sense, non-Gaussian partial correlation graph estimation methods have lagged severely behind, despite the tremendous need to move beyond the Gaussian framework for obvious practical reasons. In very recent work, a convex pseudo-likelihood approach with good model selection properties called CONCORD (Khare et al., 2014) was proposed. The CONCORD algorithm minimizes

  Q_con(Ω) = − Σᵢ log ωᵢᵢ + (1/2n) Σᵢ ‖ωᵢᵢ Yᵢ + Σ_{j≠i} ω_ij Y_j‖₂² + λ Σ_{i<j} |ω_ij|,   (1)

where Yᵢ denotes the i-th column (variable) of the n × p data matrix Y,
via cyclic coordinate-wise descent that alternates between updating the off-diagonal elements ω_ij and the diagonal elements ω_ii. It is straightforward to show that the operators for updating ω_ij (holding all other entries constant) and for updating ω_ii (holding all other entries constant) are given by

  ω_ij ← S_λ( −[(SΩ)_ij + (SΩ)_ji − (s_ii + s_jj) ω_ij] ) / (s_ii + s_jj),
  ω_ii ← ( −aᵢ + sqrt(aᵢ² + 4 s_ii) ) / (2 s_ii),  where aᵢ = Σ_{j≠i} s_ij ω_ij,

and S_λ(x) = sign(x) max(|x| − λ, 0) denotes the soft-thresholding operator.
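These closed-form coordinate updates can be sketched in a few lines of numpy. This is an illustrative sketch, not the actual gconcord implementation: it assumes the objective takes the form Q_con(Ω) = −Σᵢ log ωᵢᵢ + ½ tr(ΩSΩ) + λ Σ_{i<j}|ω_ij|, the helper names are ours, and the sweep naively recomputes SΩ for each coordinate rather than updating it incrementally.

```python
import numpy as np

def soft_threshold(x, lam):
    """Scalar soft-thresholding S_lam(x) = sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * max(abs(x) - lam, 0.0)

def concord_objective(Omega, S, lam):
    """Q_con(Omega) = -sum_i log omega_ii + 0.5 tr(Omega S Omega) + lam * sum_{i<j} |omega_ij|."""
    off = Omega - np.diag(np.diag(Omega))
    return (-np.sum(np.log(np.diag(Omega)))
            + 0.5 * np.trace(Omega @ S @ Omega)
            + 0.5 * lam * np.sum(np.abs(off)))  # /2 so each symmetric pair counts once

def coordinate_sweep(Omega, S, lam):
    """One full cycle of the coordinate updates, each exactly minimizing its 1-D subproblem."""
    p = S.shape[0]
    # Off-diagonal updates: quadratic-plus-|.| subproblem solved by soft-thresholding.
    for i in range(p):
        for j in range(i + 1, p):
            W = S @ Omega  # naive O(p^3) recomputation, for clarity only
            b = W[i, j] + W[j, i] - (S[i, i] + S[j, j]) * Omega[i, j]
            Omega[i, j] = Omega[j, i] = soft_threshold(-b, lam) / (S[i, i] + S[j, j])
    # Diagonal updates: positive root of s_ii w^2 + a_i w - 1 = 0.
    for i in range(p):
        a = S[i, :] @ Omega[:, i] - S[i, i] * Omega[i, i]
        Omega[i, i] = (-a + np.sqrt(a * a + 4.0 * S[i, i])) / (2.0 * S[i, i])
    return Omega
```

Since every update exactly minimizes its one-dimensional subproblem, each sweep is guaranteed not to increase the objective.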
This coordinate-wise algorithm is shown to converge to a global minimum, though no rate is given (Khare et al., 2014). Note that the equivalent problem assuming a Gaussian likelihood has seen much development in the last ten years, whereas a parallel development for the recently introduced CONCORD framework is lacking. We address this important gap by proposing state-of-the-art proximal gradient techniques to minimize Q_con. A rigorous theoretical analysis of the pseudo-likelihood framework and the associated proximal gradient methods is undertaken. We establish rates of convergence and also demonstrate that our approach can lead to massive computational speed-ups, thus yielding extremely fast and principled solvers for the sparse inverse covariance estimation problem outside the Gaussian setting.
2 CONCORD using proximal gradient methods
The penalized matrix version of the CONCORD objective function in (1) is given by

  Q_con(Ω) = − log det Ω_D + ½ tr(ΩSΩ) + λ ‖Ω_X‖₁,   (4)
where Ω_D and Ω_X denote the diagonal and off-diagonal parts of Ω. We will use the notation Ω = Ω_D + Ω_X to split any matrix into its diagonal and off-diagonal terms.
This section proposes a scalable and thorough approach to minimizing the CONCORD objective function using recent advances in convex optimization, and derives rates of convergence for the resulting algorithms. In particular, we use proximal gradient-based methods to achieve this goal and demonstrate the efficacy of such methods for the non-Gaussian graphical modeling problem. First, we propose CONCORD-ISTA and CONCORD-FISTA in Section 2.1: methods inspired by the iterative soft-thresholding algorithms of Beck and Teboulle (2009). We undertake a comprehensive treatment of the CONCORD optimization problem by also investigating the dual of the CONCORD problem. Other popular methods in the literature, including the potential use of the alternating minimization algorithm and the second-order proximal Newton method CONCORD-PNOPT, are considered in Supplemental Section A.5.
2.1 Iterative Soft Thresholding Algorithms: CONCORD-ISTA, CONCORD-FISTA
Iterative soft-thresholding algorithms (ISTA) have recently gained popularity after the seminal paper of Beck and Teboulle (2009). The ISTA methods are based on the forward-backward splitting method of Rockafellar (1976) and Nesterov's accelerated gradient methods (Nesterov, 1983), using soft-thresholding as the proximal operator for the ℓ₁-norm. The essence of proximal gradient algorithms is to split the objective function into a smooth part and a non-smooth part, then take a proximal step (with respect to the non-smooth part) in the negative gradient direction of the smooth part. Nesterov's accelerated gradient extension (Nesterov, 1983) uses a combination of gradient and momentum steps to achieve accelerated rates of convergence. In this section, we apply these methods in the context of CONCORD, which also has a composite objective function.
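To make the forward-backward idea concrete, here is a minimal sketch of ISTA and its accelerated FISTA variant on a generic lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁, a stand-in smooth-plus-ℓ₁ composite rather than the CONCORD objective itself; the step size τ is assumed to satisfy τ ≤ 1/‖AᵀA‖₂.

```python
import numpy as np

def soft_threshold(V, t):
    """Elementwise proximal operator of t * ||.||_1."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def ista(A, b, lam, tau, iters=200):
    """Proximal gradient: forward (gradient) step on 0.5||Ax-b||^2,
    backward (prox) step on lam||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - tau * grad, tau * lam)
    return x

def fista(A, b, lam, tau, iters=200):
    """Accelerated variant: the gradient step is taken at an extrapolated
    (momentum) point y, giving the faster O(1/k^2) rate."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - tau * grad, tau * lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For orthonormal A the minimizer is exactly the soft-thresholded data, which makes the splitting structure easy to verify by hand.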
The matrix CONCORD objective function (4) can be split into a smooth part h and a non-smooth part g:

  Q_con(Ω) = h(Ω) + g(Ω),  h(Ω) = − log det Ω_D + ½ tr(ΩSΩ),  g(Ω) = λ ‖Ω_X‖₁.
The gradient and Hessian of the smooth function h are given by

  ∇h(Ω) = −(Ω_D)⁻¹ + ½ (SΩ + ΩS),
  ∇²h(Ω) = Σᵢ ωᵢᵢ⁻² (eᵢeᵢᵀ) ⊗ (eᵢeᵢᵀ) + ½ (S ⊗ I + I ⊗ S),

where eᵢ is a column vector of zeros except for a one in the i-th position.
The proximal operator for the non-smooth function g is given by the element-wise soft-thresholding operator

  prox_{τg}(Ω) = S_Λ(Ω) = sign(Ω) ∘ max(|Ω| − Λ, 0),

where Λ is a matrix with zeros on the diagonal and τλ for each off-diagonal entry, and ∘ denotes the element-wise product. The details of the proximal gradient algorithm CONCORD-ISTA are given in Algorithm 1, and the details of the accelerated proximal gradient algorithm CONCORD-FISTA are given in Algorithm 2.
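Putting the gradient and proximal steps together, a fixed-step CONCORD-ISTA iteration can be sketched as follows. This is a sketch under the assumptions that the split is h(Ω) = −log det Ω_D + ½ tr(ΩSΩ), g(Ω) = λ‖Ω_X‖₁ with gradient ∇h(Ω) = −(Ω_D)⁻¹ + ½(SΩ + ΩS); the fixed step τ stands in for the line search of Algorithm 1, and all function names are ours.

```python
import numpy as np

def concord_parts(Omega, S, lam):
    """Smooth part h and non-smooth part g of the matrix CONCORD objective.
    g sums over all off-diagonal entries (each symmetric pair contributes
    twice), matching the entrywise prox below."""
    h = -np.sum(np.log(np.diag(Omega))) + 0.5 * np.trace(Omega @ S @ Omega)
    g = lam * np.sum(np.abs(Omega - np.diag(np.diag(Omega))))
    return h, g

def concord_grad(Omega, S):
    """Gradient of the smooth part: -Omega_D^{-1} + (S Omega + Omega S) / 2."""
    W = S @ Omega
    return -np.diag(1.0 / np.diag(Omega)) + 0.5 * (W + W.T)  # W.T = Omega S by symmetry

def prox(Omega, t, lam):
    """Entrywise soft-thresholding; the diagonal is not penalized (threshold 0)."""
    Lam = t * lam * (1.0 - np.eye(Omega.shape[0]))
    return np.sign(Omega) * np.maximum(np.abs(Omega) - Lam, 0.0)

def concord_ista(S, lam, tau=0.05, iters=200):
    """Fixed-step proximal gradient iteration; tau must be small enough
    (<= 1/L) to guarantee descent and keep the diagonal positive."""
    Omega = np.eye(S.shape[0])
    for _ in range(iters):
        Omega = prox(Omega - tau * concord_grad(Omega, S), tau, lam)
    return Omega
```

With a sufficiently small step the iteration is a monotone descent method on h + g, and the iterates remain symmetric with a strictly positive diagonal.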
2.2 Choice of step size
In the absence of a good estimate of the Lipschitz constant L, the step size for each iteration of CONCORD-ISTA and CONCORD-FISTA is chosen using a backtracking line search. The line search for iteration k starts with an initial step size τ_{k,0} and reduces the step by a constant factor until the new iterate Ω_{k+1} satisfies the sufficient descent condition:

  h(Ω_{k+1}) ≤ h(Ω_k) + ⟨∇h(Ω_k), Ω_{k+1} − Ω_k⟩ + (1/(2τ_k)) ‖Ω_{k+1} − Ω_k‖_F².
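The backtracking loop itself is generic and can be sketched as follows; this is a sketch for an arbitrary smooth part h and proximal map, where the shrink factor beta and the helper names are our own choices rather than constants from the text.

```python
import numpy as np

def backtrack_step(h, grad_h, prox, X, tau0, beta=0.5, max_tries=50):
    """Shrink tau until the quadratic upper bound (sufficient descent) holds:
    h(X+) <= h(X) + <grad h(X), X+ - X> + ||X+ - X||_F^2 / (2 tau)."""
    G = grad_h(X)
    hX = h(X)
    tau = tau0
    for _ in range(max_tries):
        X_new = prox(X - tau * G, tau)     # candidate proximal gradient step
        D = X_new - X
        if h(X_new) <= hX + np.sum(G * D) + np.sum(D * D) / (2.0 * tau):
            return X_new, tau
        tau *= beta                        # condition failed: shrink and retry
    return X_new, tau
```

The condition is guaranteed to hold once τ ≤ 1/L, so the loop always terminates for Lipschitz-smooth h.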
In Section 4, we implement the algorithms with the initial step size chosen in three different ways: (a) a constant starting step size (= 1), (b) the feasible step size from the previous iteration, τ_{k−1}, and (c) the step size heuristic of Barzilai and Borwein. The Barzilai-Borwein heuristic step size is given by

  τ_{k,0} = ⟨ΔΩ_k, ΔΩ_k⟩ / ⟨ΔΩ_k, Δ∇h_k⟩,  where ΔΩ_k = Ω_k − Ω_{k−1} and Δ∇h_k = ∇h(Ω_k) − ∇h(Ω_{k−1}).

This is an approximation to the secant equation, which serves as a proxy for second-order information using successive gradients (see Barzilai and Borwein (1988) for details).
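A hedged sketch of the heuristic follows; the fallback value for non-positive curvature estimates is our own safeguard, not something prescribed by the text.

```python
import numpy as np

def bb_step(X_prev, X_curr, G_prev, G_curr, fallback=1.0):
    """Barzilai-Borwein initial step tau = <dX, dX> / <dX, dG>: a secant
    approximation to the inverse curvature along the last move."""
    dX = X_curr - X_prev
    dG = G_curr - G_prev
    denom = np.sum(dX * dG)
    if denom <= 0.0:
        return fallback  # curvature estimate unusable; fall back to a default
    return np.sum(dX * dX) / denom
```

On an exactly quadratic function the heuristic recovers the reciprocal of the curvature, which is the ideal gradient step.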
2.3 Computational complexity
After the one-time calculation of S, the most significant computation in each iteration of the CONCORD-ISTA and CONCORD-FISTA algorithms is the matrix-matrix multiplication in the gradient term. If s is the number of non-zeros in Ω, then the product SΩ can be computed using O(sp) operations if we exploit the extreme sparsity of Ω. By the symmetry of S and Ω, the second matrix-matrix product ΩS is simply (SΩ)ᵀ, and quantities of the form tr(ΩSΩ) can be accumulated over the set of non-zero ω_ij's only; this requires only O(s) additional operations. The remaining computations are all at the element level and can be completed in O(p²) operations. Therefore, the overall computational complexity of each iteration reduces to O(sp). On the other hand, the proximal gradient algorithms for the Gaussian framework require inversion of a full p × p matrix, which is non-parallelizable and requires O(p³) operations. The coordinate-wise method for optimizing CONCORD in Khare et al. (2014) also requires cycling through the entries of Ω in a specified order and thus does not allow parallelization. In contrast, CONCORD-ISTA and CONCORD-FISTA can use 'perfectly parallel' implementations to distribute the above matrix-matrix multiplications. At no step do we need to keep all of the dense matrices on a single machine. Therefore, CONCORD-ISTA and CONCORD-FISTA are scalable to high dimensions, restricted only by the number of machines.
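The way symmetry makes the second product free can be illustrated with scipy's sparse routines; scipy is our stand-in here, since the paper's actual implementation uses the Eigen library in C++.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
p = 200
S = rng.standard_normal((p, p))
S = (S + S.T) / 2.0                                 # dense symmetric S (p x p)
Omega = sparse.random(p, p, density=0.02, format="csr", random_state=0)
Omega = ((Omega + Omega.T) / 2.0).tocsr()           # symmetric sparse iterate, s non-zeros

# Multiply from the sparse side: cost scales with the non-zeros of Omega,
# roughly O(s p), rather than the dense O(p^3).
W = (Omega @ S).T   # (Omega S)^T = S Omega, since both matrices are symmetric
# Omega S then comes for free as W.T -- no second multiplication needed.
```

The transpose trick means the gradient term ½(SΩ + ΩS) needs only one sparse-dense product per iteration.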
3 Convergence Analysis
In this section, we prove convergence of the CONCORD-ISTA and CONCORD-FISTA methods along with their respective convergence rates of O(1/k) and O(1/k²). We would like to point out that, although the authors in Khare et al. (2014) provide a proof of convergence for their coordinate-wise minimization algorithm for CONCORD, they do not provide any rates of convergence. The arguments for convergence leverage the results in Beck and Teboulle (2009), but require some essential ingredients. We begin by proving lower and upper bounds on the diagonal entries of Ω for Ω belonging to a level set of Q_con. The lower bound on the diagonal entries of Ω establishes Lipschitz continuity of the gradient, based on the Hessian of the smooth function as stated in (6). The proof of the lower bound uses the existence of an upper bound on the diagonal entries; hence, we prove both bounds. We begin by defining a level set of the objective function, starting from an arbitrary initial point Ω₀ with a finite function value, as

  C = { Ω : Q_con(Ω) ≤ Q_con(Ω₀) }.
For the positive semidefinite matrix S, let T denote the upper triangular factor from a decomposition of S such that S = TᵀT (the scaling of the factor simplifies further arithmetic). Assuming the diagonal entries of S to be strictly nonzero (if s_ii = 0, then the i-th component can be ignored upfront, since it has zero variance and is equal to a constant for every data point), for every i there is at least one j such that the (j, i) entry of T is nonzero. Using this, we prove the following theorem bounding the diagonal entries of Ω.
Theorem 3.1. For any symmetric matrix Ω satisfying Q_con(Ω) ≤ Q_con(Ω₀), the diagonal elements of Ω are bounded above and below by constants which depend only on S, λ and Q_con(Ω₀). In other words,

  0 < c₁ ≤ ωᵢᵢ ≤ c₂ < ∞  for all i,

for some constants c₁ and c₂.
(a) Upper bound: Suppose Q_con(Ω) ≤ Q_con(Ω₀). Considering only the i-th entry in the Frobenius norm (trace) term and the i-th column penalty in the third term yields inequality (12). Splitting into the two cases indicated in (13) and (14), we use the first inequality (13) in the first case and the second inequality (14) in the second case. In either case, the resulting functions of ωᵢᵢ are unbounded as ωᵢᵢ → ∞. Hence, the upper bound Q_con(Ω₀) on these functions guarantees a constant c₂ such that ωᵢᵢ ≤ c₂. Therefore, ωᵢᵢ ≤ c₂ for all i.
(b) Lower bound: By the positivity of the trace term and the ℓ₁ penalty term (on the off-diagonal entries), the objective Q_con(Ω) is bounded below by an expression involving −log ωᵢᵢ. The negative log function is convex and hence lies above its tangent at any point c > 0; that is, −log x ≥ −log c − (x − c)/c for all x > 0. Applying this inequality and simplifying gives a lower bound on the diagonal entries, ωᵢᵢ ≥ c₁ for some constant c₁ > 0. Therefore, c₁ serves as a lower bound for ωᵢᵢ for all i. ∎
Given that the function values are non-increasing along the iterates of Algorithms 1, 2 and 3, the iterates Ω_k satisfy Q_con(Ω_k) ≤ Q_con(Ω₀) for all k ≥ 0. The lower bound on the diagonal elements of Ω_k provides the Lipschitz continuity of the gradient using
Therefore, using the mean-value theorem, the gradient satisfies

  ‖∇h(Ω₁) − ∇h(Ω₂)‖_F ≤ L ‖Ω₁ − Ω₂‖_F  for all Ω₁, Ω₂ in the level set,

with Lipschitz continuity constant L ≤ c₁⁻² + ‖S‖₂, which follows from bounding the Hessian in (6) over the level set. The remaining argument for convergence follows from the theorems in Beck and Teboulle (2009).
Hence, CONCORD-ISTA and CONCORD-FISTA converge at the rates of O(1/k) and O(1/k²), respectively, at the k-th iteration.
4 Implementation & Numerical Experiments
In this section, we outline algorithm implementation details and present the results of our comprehensive numerical evaluation. Section 4.1 gives performance comparisons using synthetic multivariate Gaussian datasets. These datasets are generated over a wide range of sample sizes (n) and dimensions (p). Additionally, the convergence of CONCORD-ISTA and CONCORD-FISTA is illustrated. Section 4.2 presents timing results from analyzing a real breast cancer dataset with outliers. Comparisons are made to the coordinate-wise CONCORD implementation in the gconcord package for R, available at http://cran.r-project.org/web/packages/gconcord/.
For implementing the proposed algorithms, we can take advantage of existing linear algebra libraries. Most of the numerical computations in Algorithms 1 and 2 are linear algebra operations, and, unlike the sequential coordinate-wise CONCORD algorithm, the CONCORD-ISTA and CONCORD-FISTA implementations can solve increasingly larger problems as more scalable and efficient linear algebra libraries become available. For this work, we opted to use the Eigen library (Guennebaud, Jacob, et al., 2010) for its sparse linear algebra routines written in C++. Algorithms 1 and 2 were also written in C++ and then interfaced to R for testing. Table 3 gives the names of the various CONCORD-ISTA and CONCORD-FISTA versions using different initial step size choices.
4.1 Synthetic Datasets
Synthetic datasets were generated from true sparse positive definite random matrices of three sizes: p = 1000, 3000 and 5000. The instances of random matrices used here consist of 4995, 14985 and 24975 non-zeros, corresponding to 1%, 0.33% and 0.20% edge densities, respectively. For each p, three random samples of sizes n = 0.25p, 0.75p and 1.25p were used as inputs. The initial guess and the convergence criteria were matched to those of the coordinate-wise CONCORD implementation. Highlights of the results are summarized below, and the complete set of comparisons is given in Supplementary Section A.
For the synthetic datasets, our experiments indicate that the two variations of the CONCORD-ISTA method show little performance difference; however, ccista_0 was marginally faster in our tests. On the other hand, the ccfista_1 variation of CONCORD-FISTA, which uses the previous feasible step size τ_{k−1} as the initial step size, was significantly faster than ccfista_0. Table 3 gives actual running times for the two best performing algorithms, ccista_0 and ccfista_1, against coordinate-wise concord. As p and n increase, ccista_0 performs very well. For smaller p and n, coordinate-wise concord performs well (more in Supplemental Section A). This can be attributed to the computational complexity of coordinate-wise CONCORD (Khare et al., 2014), and to the sparse linear algebra routines used in the CONCORD-ISTA and CONCORD-FISTA implementations slowing down as the number of non-zero elements in Ω increases. On the other hand, for a large sample-to-dimension ratio n/p, the proposed methods ccista_0 and ccfista_1 are significantly faster than coordinate-wise concord. In particular, when p = 5000 and n = 6250, the speed-up of ccista_0 can be as much as 150 times over coordinate-wise concord.
The convergence behavior of the CONCORD-ISTA and CONCORD-FISTA methods is shown in Figure 1 for the best performing algorithms, ccista_0 and ccfista_1. The vertical axis is the subgradient magnitude (see Algorithms 1, 2). The plots show that ccista_0 converges at a nearly constant rate, much faster than ccfista_1, which appears to slow down after a few initial iterations. While the theoretical convergence results from Section 3 prove convergence rates of O(1/k) and O(1/k²) for CONCORD-ISTA and CONCORD-FISTA, respectively, in practice ccista_0 with constant step size performed the fastest for the tests in this section.
|Variation|Method|Initial step size|
|ccista_0|CONCORD-ISTA|constant (= 1)|
|ccista_1|CONCORD-ISTA|previous step τ_{k−1}|
|ccfista_0|CONCORD-FISTA|constant (= 1)|
|ccfista_1|CONCORD-FISTA|previous step τ_{k−1}|
4.2 Real Data
Real datasets arising from the physical and biological sciences often are not multivariate Gaussian and can contain outliers; hence, convergence characteristics may differ on such datasets. In this section, the performance of the proposed methods is assessed on a breast cancer dataset (Chang et al., 2005). This dataset contains expression levels of 24481 genes on 266 patients with breast cancer. Following the approach in Khare et al. (2014), the number of genes is reduced by utilizing the clinical information that is provided together with the microarray expression dataset. In particular, survival analysis via univariate Cox regression with patient survival times is used to select a subset of genes closely associated with breast cancer. A suitable choice of p-value cutoff yields a reduced dataset of genes.
Oftentimes, graphical model selection algorithms are applied in a non-Gaussian, n ≪ p setting, as is the case here. In this setting, the coordinate-wise CONCORD algorithm is especially fast due to its computational complexity. However, even in this setting, the newly proposed methods ccista_0, ccista_1 and ccfista_1 perform competitively with, or often better than, concord, as illustrated in Table 3. On this real dataset, ccista_1 performed the fastest, whereas ccista_0 was the fastest on the synthetic datasets.
5 Conclusion
Gaussian graphical model estimation, or inverse covariance estimation, has seen tremendous advances in the past few years. In this paper, we propose using proximal gradient methods to solve the general non-Gaussian sparse inverse covariance estimation problem. Rates of convergence were established for the CONCORD-ISTA and CONCORD-FISTA algorithms. Coordinate-wise minimization has been the standard approach to this problem thus far, and we provide numerical results comparing CONCORD-ISTA/FISTA with coordinate-wise minimization. We demonstrate that CONCORD-ISTA outperforms the coordinate-wise approach in general, and that in high dimensional settings CONCORD-ISTA can outperform coordinate-wise optimization by orders of magnitude. The methodology is also tested on real datasets. We undertake a comprehensive treatment of the problem by also examining the dual formulation and considering methods to maximize the dual objective. We note that efforts similar to ours for the Gaussian case have appeared in not one, but several NIPS and other publications. Our approach, on the other hand, gives a complete and thorough treatment of the non-Gaussian partial correlation graph estimation problem, all in this one self-contained paper.
- Banerjee et al.  Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. JMLR, 9:485–516, 2008.
- Dalal and Rajaratnam  Onkar Anant Dalal and Bala Rajaratnam. G-AMA: Sparse Gaussian graphical model estimation via alternating minimization. arXiv preprint arXiv:1405.3034, 2014.
- Peng et al.  Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial Correlation Estimation by Joint Sparse Regression Models. Journal of the American Statistical Association, 104(486):735–746, June 2009.
- Rocha et al.  Guilherme V Rocha, Peng Zhao, and Bin Yu. A path following algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE). Technical Report 60628102, 2008.
- Friedman et al.  Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Applications of the lasso and grouped lasso to the estimation of sparse graphical models. Technical report, 2010.
- Khare et al.  Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees. to appear in Journal of the Royal Statistical Society: Series B, 2014.
- Beck and Teboulle  Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
- Rockafellar  R.T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.
- Nesterov  Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
- Barzilai and Borwein  J. Barzilai and J.M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141–148, 1988.
- Guennebaud et al.  Gaël Guennebaud, Benoît Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
- Chang et al.  Howard Y Chang, Dimitry S A Nuyten, Julie B Sneddon, Trevor Hastie, Robert Tibshirani, Therese Sørlie, Hongyue Dai, Yudong D He, Laura J van't Veer, Harry Bartelink, Matt van de Rijn, Patrick O Brown, and Marc J van de Vijver. Robustness, scalability, and integration of a wound-response gene expression signature in predicting breast cancer survival. Proceedings of the National Academy of Sciences of the United States of America, 102(10):3738–43, March 2005.
Appendix A Timing comparison
A.1 Median Speed-up
Median speed-up relative to concord:
|p|n|ccista_0|ccfista_1|
|1000|250|0.6 (0.7)|0.4 (0.3)|
|1000|750|3.4 (1.8)|1.9 (0.9)|
|1000|1250|23.1 (5.7)|12.0 (3.5)|
|3000|750|2.7 (2.1)|1.9 (1.6)|
|3000|2250|12.8 (1.6)|8.8 (2.2)|
|3000|3750|81.9 (6.6)|58.2 (8.7)|
|5000|1250|5.6 (3.2)|3.0 (1.8)|
|5000|3750|21.1 (2.6)|13.5 (2.6)|
|5000|6250|145.8 (6.6)|110.1 (16.4)|
A.2 Comparison among CONCORD-ISTA and CONCORD-FISTA variations
A.3 Comparison with CONCORD algorithm
A.4 Running times
A.5 Other Methods
A.5.1 Dual problem of CONCORD
Formulating the dual using the matrix form is challenging, since the KKT conditions involving the gradient term do not have a closed-form solution as in the case of the Gaussian problem in Dalal and Rajaratnam (2014). Therefore, we consider a vector form of the CONCORD problem by defining two new variables as
We define two coefficient matrices as
where the two coefficient matrices are of the appropriate dimensions. Using these definitions, the CONCORD problem (4) can be rewritten as
We will use simplified notation wherever possible.
The transformed CONCORD problem in (23) can be written in composite form using a new variable as
The Lagrangian for this problem is given by
Maximizing with respect to the three primal variables yields the following optimality conditions (the dot notation is adapted from MATLAB to denote element-wise operations):
Substituting these, the dual problem can be written as
where the remaining term is a constant. This problem can also be written in composite form as
The gradient and Hessian of the smooth function are given by
Here, the Hessian is bounded away from the positive semi-definite boundary; hence the function is strongly convex. Moreover, along the lines of Theorem 3.1, we can show that if the dual variable is restricted to a convex level set, then the function has a Lipschitz continuous gradient. Note that
Therefore, the hessian satisfies
To conclude, the dual problem provides an alternate route to proving the O(1/k) and O(1/k²) rates of convergence for the CONCORD problem.
A.5.2 Proximal Newton's Algorithm for CONCORD
Recall that the Hessian of the smooth function, as given in (6), is
The subproblem solved for the direction of descent for the second order PNOPT algorithm is given by
Using these, the matrix version of the second-order algorithm is given in Algorithm 3. Here, the subproblem for the descent step is a large Lasso problem, which can be solved by standard Lasso packages that use coordinate descent methods.