On the generalized shift-splitting preconditioner for saddle point problems
Abstract. In this paper, the generalized shift-splitting preconditioner is implemented for saddle point problems with a symmetric positive definite (1,1)-block and a symmetric positive semidefinite (2,2)-block. The proposed preconditioner is extracted from a stationary iterative method which is unconditionally convergent. Moreover, a relaxed version of the proposed preconditioner is presented and some properties of the eigenvalue distribution of the corresponding preconditioned matrix are studied. Finally, some numerical experiments on test problems arising from the finite element discretization of the Stokes problem are given to show the effectiveness of the preconditioners.
Keywords: Saddle point problem, preconditioner, shift-splitting, symmetric positive definite.
AMS Subject Classification: 65F10, 65F50, 65N22.
1 Introduction

Consider the saddle point linear system

\[ \mathcal{A}u \equiv \begin{pmatrix} A & B^{T} \\ -B & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b, \tag{1} \]

where $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite (SPD), $C \in \mathbb{R}^{m \times m}$ is symmetric positive semidefinite and $B \in \mathbb{R}^{m \times n}$, $m \leq n$, is of full rank. Moreover, $f \in \mathbb{R}^{n}$ and $g \in \mathbb{R}^{m}$. We also assume that the matrices $A$, $B$ and $C$ are large and sparse. According to Lemma 1.1 in [9], the matrix $\mathcal{A}$ is nonsingular. Such systems arise in a variety of scientific computing and engineering applications, including constrained optimization, computational fluid dynamics, mixed finite element discretization of the Navier-Stokes equations, etc. (see [1, 10, 14, 25]). Application-based analysis can be seen in [22, 26, 28].
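For concreteness, the block structure of (1) can be sketched on a small artificial instance. The following NumPy snippet (an illustrative toy, not part of the paper's experiments; all sizes and matrices are random stand-ins) assembles a saddle point matrix with an SPD (1,1)-block, a rank-deficient symmetric positive semidefinite (2,2)-block and a full-rank coupling block:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite (1,1)-block
B = rng.standard_normal((m, n))      # full row rank (almost surely), m <= n
N = rng.standard_normal((m, 2))
C = N @ N.T                          # symmetric positive semidefinite, rank 2

# saddle point matrix of system (1)
Amat = np.block([[A, B.T], [-B, C]])

# A SPD and B of full rank imply that the saddle point matrix is nonsingular
nonsingular = np.linalg.matrix_rank(Amat) == n + m
```

The same toy data are reused in the sketches below.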
In the last decade, there has been intensive work on the development of effective iterative methods for solving matrix equations with different structures (see, for example, [4, 18, 19, 20, 21, 27]). Benzi and Golub [9] investigated the convergence and the preconditioning properties of the Hermitian and skew-Hermitian splitting (HSS) iterative method [4] when it is used for solving saddle point problems. Bai et al. in [5] established the preconditioned HSS (PHSS) iterative method, which involves a single parameter, and then Bai and Golub in [3] proposed its two-parameter acceleration, called the accelerated Hermitian and skew-Hermitian splitting (AHSS) iterative method; see also [2]. Besides these HSS-type methods, Uzawa-type schemes [6, 7, 13, 16, 23] and preconditioned Krylov subspace methods, such as MINRES and GMRES incorporated with suitable preconditioners, have also been applied to solve saddle point problems (see [29, 30, 31, 32] and the references therein, as well as [11, 12]). The reader is also referred to [10] for a comprehensive survey.
Cao et al. in [15] presented the shift-splitting preconditioner

\[ \mathcal{P}_{SS} = \frac{1}{2} \begin{pmatrix} \alpha I + A & B^{T} \\ -B & \alpha I \end{pmatrix}, \]

which is a skillful generalization of the idea of the shift-splitting preconditioner initially introduced in [8] for solving a non-Hermitian positive definite linear system $Ax = b$, where $\alpha > 0$ and $I$ is the identity matrix. Recently, Chen and Ma in [17] studied the two-parameter generalization of the preconditioner $\mathcal{P}_{SS}$, say

\[ \mathcal{P}_{GSS} = \frac{1}{2} \begin{pmatrix} \alpha I + A & B^{T} \\ -B & \beta I \end{pmatrix}, \]

for solving the saddle point linear system (1) with $C = 0$, where $\alpha > 0$ and $\beta > 0$.
In this paper, we propose a modified generalized shift-splitting (MGSS) preconditioner for the saddle point problem (1) with $C \neq 0$. The MGSS preconditioner is based on a splitting of the saddle point matrix which results in an unconditionally convergent stationary iterative method. Moreover, a relaxed version of the MGSS preconditioner is presented and the eigenvalue distribution of the corresponding preconditioned matrix is studied.
2 The generalized shift-splitting preconditioner
Let $\alpha, \beta > 0$. Consider the splitting $\mathcal{A} = \mathcal{P} - \mathcal{Q}$, where

\[ \mathcal{P} = \frac{1}{2} \begin{pmatrix} \alpha I + A & B^{T} \\ -B & \beta I + C \end{pmatrix}, \qquad \mathcal{Q} = \frac{1}{2} \begin{pmatrix} \alpha I - A & -B^{T} \\ B & \beta I - C \end{pmatrix}. \tag{2} \]
This splitting leads to the following stationary iterative method (the MGSS iterative scheme)

\[ u^{(k+1)} = \mathcal{P}^{-1} \mathcal{Q}\, u^{(k)} + \mathcal{P}^{-1} b, \qquad k = 0, 1, 2, \ldots, \tag{3} \]

for solving the linear system (1), where $u^{(0)}$ is an initial guess. Therefore, the iteration matrix of the MGSS iterative method is given by $\Gamma = \mathcal{P}^{-1}\mathcal{Q}$. In the sequel, the convergence of the proposed method is studied. It is well known that the iterative method (3) is convergent for every initial guess if and only if $\rho(\Gamma) < 1$, where $\rho(\Gamma)$ denotes the spectral radius of $\Gamma$ (see [27]). Let $u = (x; y)$ be an eigenvector corresponding to an eigenvalue $\lambda$ of $\Gamma$. Then we have $\mathcal{Q}u = \lambda \mathcal{P}u$, or equivalently

\[ (\alpha I - A)x - B^{T} y = \lambda (\alpha I + A)x + \lambda B^{T} y, \tag{4} \]

\[ B x + (\beta I - C)y = -\lambda B x + \lambda (\beta I + C) y. \tag{5} \]
Lemma 1. Let $\alpha, \beta > 0$. If $\lambda$ is an eigenvalue of the matrix $\Gamma$, then $\lambda \neq \pm 1$.
Theorem 1. Let $\alpha, \beta > 0$ and let $\lambda$ be an eigenvalue of the matrix $\Gamma$. Then $|\lambda| < 1$.
Proof. Without loss of generality, let $\|x\|_{2} = 1$. Multiplying both sides of (4) by $x^{*}$ yields

\[ \alpha - x^{*}Ax - x^{*}B^{T}y = \lambda \left( \alpha + x^{*}Ax + x^{*}B^{T}y \right). \tag{6} \]
We consider two cases $y = 0$ and $y \neq 0$. If $y = 0$, then Eq. (6) implies

\[ \lambda = \frac{\alpha - x^{*}Ax}{\alpha + x^{*}Ax}, \tag{7} \]

and hence $|\lambda| < 1$, since $\alpha > 0$ and $x^{*}Ax > 0$.
We now assume that $y \neq 0$. In this case, from Eq. (5) we obtain (noting that $\lambda \neq -1$)

\[ Bx = \frac{1}{1 + \lambda} \big( (\lambda - 1)\beta I + (\lambda + 1) C \big) y. \]
Letting , , and , it follows from the latter equation that
Since $\alpha > 0$ and $\beta > 0$, from (8) we see that
Hence, we have
which completes the proof. ∎
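The splitting, the stationary iteration and the convergence guaranteed by Theorem 1 can be illustrated numerically. The sketch below (a NumPy toy with artificial random matrices; $\alpha = \beta = 1$ are chosen arbitrarily) checks that the two halves of the splitting recover the saddle point matrix, that the spectral radius of the iteration matrix is below one, and that the iteration converges:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
N = rng.standard_normal((m, 2)); C = N @ N.T
Amat = np.block([[A, B.T], [-B, C]])

alpha, beta = 1.0, 1.0
P = 0.5 * np.block([[alpha * np.eye(n) + A, B.T], [-B, beta * np.eye(m) + C]])
Q = 0.5 * np.block([[alpha * np.eye(n) - A, -B.T], [B, beta * np.eye(m) - C]])

# the two halves recover the saddle point matrix: A = P - Q
split_ok = np.allclose(P - Q, Amat)

# spectral radius of the iteration matrix Gamma = P^{-1} Q
rho = max(abs(np.linalg.eigvals(np.linalg.solve(P, Q))))

# stationary MGSS iteration u_{k+1} = P^{-1}(Q u_k + b)
b = Amat @ np.ones(n + m)            # exact solution: vector of all ones
u = np.zeros(n + m)
for _ in range(1000):
    u = np.linalg.solve(P, Q @ u + b)
residual = np.linalg.norm(Amat @ u - b)
```

In line with Theorem 1, `rho` comes out strictly below one for this instance, and the residual decays accordingly.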
Theorem 1 guarantees the convergence of the MGSS method; however, the stationary iterative method (3) is typically too slow to be competitive. Nevertheless, it can serve as a preconditioner for a Krylov subspace method such as GMRES, or its restarted version GMRES($m$), to solve the system (1). At each step of the MGSS iterative method, or when the MGSS preconditioner is applied within a Krylov subspace method, we need to compute a vector of the form $z = \mathcal{P}^{-1} r$ for a given $r = (r_{1}; r_{2})$, where $r_{1} \in \mathbb{R}^{n}$ and $r_{2} \in \mathbb{R}^{m}$. It is not difficult to check that

\[ \mathcal{P} = \frac{1}{2} \begin{pmatrix} I & B^{T}(\beta I + C)^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} S & 0 \\ -B & \beta I + C \end{pmatrix}, \tag{9} \]

where $S = \alpha I + A + B^{T}(\beta I + C)^{-1} B$. Hence, the vector $z = \mathcal{P}^{-1} r$ can be computed by the following algorithm.
Algorithm 1. Computation of $z = \mathcal{P}^{-1} r$.
Step 1. Solve $(\beta I + C) w = 2 r_{2}$ for $w$.
Step 2. Compute $v = 2 r_{1} - B^{T} w$.
Step 3. Solve $S z_{1} = v$ for $z_{1}$, where $S = \alpha I + A + B^{T}(\beta I + C)^{-1} B$.
Step 4. Solve $(\beta I + C) z_{2} = 2 r_{2} + B z_{1}$ for $z_{2}$.
Obviously, the matrix $S$ is SPD. In a practical implementation of Algorithm 1 one may use the conjugate gradient (CG) method or a preconditioned CG (PCG) method to solve the system of Step 3. It is noted that, since the matrix $\beta I + C$ is SPD and of small size in comparison to the size of $A$, we use the Cholesky factorization of $\beta I + C$ in Steps 1, 3 and 4.
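A minimal NumPy sketch of Algorithm 1 follows (illustrative only: the random matrices and $\alpha = \beta = 1$ are artificial, and a plain hand-written CG stands in for the PCG an efficient large-scale implementation might use). The Cholesky factor of $\beta I + C$ is reused in Steps 1, 3 and 4, and the Schur-complement system of Step 3 is solved matrix-free:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
N = rng.standard_normal((m, 2)); C = N @ N.T
alpha, beta = 1.0, 1.0

L = np.linalg.cholesky(beta * np.eye(m) + C)   # factor used in Steps 1, 3 and 4

def solve_bc(t):
    # solve (beta*I + C) x = t via the Cholesky factor L
    return np.linalg.solve(L.T, np.linalg.solve(L, t))

def cg(matvec, rhs, tol=1e-12, maxiter=200):
    # plain conjugate gradient for the SPD Schur system of Step 3
    x = np.zeros_like(rhs); r = rhs.copy(); p = r.copy(); rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(rhs):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def apply_P_inv(r1, r2):
    w = solve_bc(2.0 * r2)                      # Step 1: (beta*I + C) w = 2 r2
    v = 2.0 * r1 - B.T @ w                      # Step 2: v = 2 r1 - B^T w
    Smv = lambda x: alpha * x + A @ x + B.T @ solve_bc(B @ x)
    z1 = cg(Smv, v)                             # Step 3: S z1 = v, matrix-free CG
    z2 = solve_bc(2.0 * r2 + B @ z1)            # Step 4: (beta*I + C) z2 = 2 r2 + B z1
    return np.concatenate([z1, z2])

# check against a direct dense solve with P
P = 0.5 * np.block([[alpha * np.eye(n) + A, B.T], [-B, beta * np.eye(m) + C]])
r = rng.standard_normal(n + m)
z = apply_P_inv(r[:n], r[n:])
err = np.linalg.norm(P @ z - r)
```

Applying the steps in this order reproduces $\mathcal{P}^{-1} r$ without ever forming $S$ explicitly; only matrix-vector products with $A$, $B$, $B^{T}$ and small solves with $\beta I + C$ are needed.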
In the sequel, we consider the relaxed MGSS (RMGSS) preconditioner

\[ \mathcal{P}_{R} = \begin{pmatrix} A & B^{T} \\ -B & \beta I + C \end{pmatrix} \]

for the saddle point problem (1). The next theorem discusses the eigenvalue distribution of $\mathcal{P}_{R}^{-1}\mathcal{A}$.
Theorem 2. The preconditioned matrix $\mathcal{P}_{R}^{-1}\mathcal{A}$ has an eigenvalue 1 with multiplicity $n$, and the remaining eigenvalues are $\mu_{i}/(\beta + \mu_{i})$, $i = 1, \ldots, m$, where the $\mu_{i}$'s are the eigenvalues of the matrix $W = B A^{-1} B^{T} + C$.
Proof. By using Eq. (9) (with $\alpha = 0$ and neglecting the pre-factor $\frac{1}{2}$) we obtain

\[ \mathcal{P}_{R}^{-1}\mathcal{A} = \begin{pmatrix} I & \beta A^{-1} B^{T} (W + \beta I)^{-1} \\ 0 & (W + \beta I)^{-1} W \end{pmatrix}, \]

where $W = B A^{-1} B^{T} + C$. Therefore, $\mathcal{P}_{R}^{-1}\mathcal{A}$ has an eigenvalue 1 with multiplicity $n$ and the remaining eigenvalues are the eigenvalues of $(W + \beta I)^{-1} W$, and this completes the proof. ∎
Similar to Theorem 3.2 in [15], it can be shown that the dimension of the Krylov subspace $\mathcal{K}(\mathcal{P}_{R}^{-1}\mathcal{A}, b)$ is at most $m + 1$. This shows that the GMRES iterative method applied to (1) in conjunction with the preconditioner $\mathcal{P}_{R}$ terminates in at most $m + 1$ iterations and provides the exact solution of the system (see Proposition 6.2 in [27]). Obviously, the matrix $W = B A^{-1} B^{T} + C$ is SPD and as a result its eigenvalues $\mu_{i}$ are positive. Therefore, from Theorem 2 we see that $0 < \mu_{i}/(\beta + \mu_{i}) < 1$ and $\mu_{i}/(\beta + \mu_{i}) \to 1$ as $\beta \to 0$. Hence, the eigenvalues of $\mathcal{P}_{R}^{-1}\mathcal{A}$ would be well clustered around the point $(1, 0)$ for small values of $\beta$. In this case, the matrix $\mathcal{P}_{R}^{-1}\mathcal{A}$ would be well conditioned.
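The spectrum described by Theorem 2 can be checked numerically on a small artificial instance (random matrices; $\beta = 0.1$ is an arbitrary illustrative choice). The sketch compares the eigenvalues of the preconditioned matrix with the values predicted from $W = B A^{-1} B^{T} + C$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
N = rng.standard_normal((m, 2)); C = N @ N.T
Amat = np.block([[A, B.T], [-B, C]])
beta = 0.1

# relaxed MGSS preconditioner and the spectrum of the preconditioned matrix
P_R = np.block([[A, B.T], [-B, beta * np.eye(m) + C]])
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P_R, Amat)).real)

# predicted spectrum: 1 with multiplicity n, plus mu_i / (beta + mu_i)
W = B @ np.linalg.solve(A, B.T) + C
mu = np.linalg.eigvalsh((W + W.T) / 2.0)
pred = np.sort(np.concatenate([np.ones(n), mu / (beta + mu)]))
gap = np.max(np.abs(eigs - pred))
```

The computed and predicted spectra agree to machine precision on this toy, and the non-unit eigenvalues move toward 1 as `beta` is decreased.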
3 Numerical Experiments
In this section, some numerical experiments are given to show the effectiveness of the MGSS and RMGSS preconditioners. All the numerical experiments presented in this section were computed in double precision using MATLAB on a laptop with an Intel Core i7 CPU (1.8 GHz) and 6 GB of RAM. We consider the Stokes problem (see [25, page 221])

\[ -\Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0, \tag{10} \]

in $\Omega$. We use the interpolant of the exact solution $\mathbf{u}$ for specifying Dirichlet conditions everywhere on the boundary. The test problems were generated by using the IFISS software package written by Elman et al. [24]. The IFISS package was used to discretize the problem (10) using stabilized Q1-P0 finite elements. We used $\frac{1}{4}$ as the stabilization parameter. Matrix properties of the test problem for different sizes are given in Table 1.
We use GMRES($m$) in conjunction with the preconditioners $\mathcal{P}$ and $\mathcal{P}_{R}$. We also compare the results of the MGSS and RMGSS preconditioners with those of the Hermitian and skew-Hermitian splitting (HSS) preconditioner (see [9]). To show the effectiveness of the methods we also give the results of GMRES(5) without preconditioning. We use a null vector as the initial guess, and the iterations are stopped as soon as the relative residual norm is sufficiently small. In the implementation of the preconditioners $\mathcal{P}$ and $\mathcal{P}_{R}$, in Algorithm 1, we use the Cholesky factorization of $\beta I + C$ and the CG method to solve the system of Step 3. It is noted that, in the CG method, the iteration is terminated when the residual norm is reduced by a factor of 100 or when the number of iterations exceeds 40. Numerical results are given in Table 2 for different sizes of the problem. In this table, "IT" denotes the number of iterations for convergence and "CPU" stands for the corresponding CPU times (in seconds). For the HSS preconditioner we experimentally computed the optimal value of the parameter involved (see [2]). For the MGSS method we present the numerical results for selected values of $\alpha$ and $\beta$, and for the RMGSS method for selected values of $\beta$. As the numerical results show, all the preconditioners are effective. We also observe that the MGSS and RMGSS preconditioners are superior to the HSS preconditioner in terms of both iteration counts and CPU times. For further investigation, the eigenvalue distributions of the matrices $\mathcal{A}$, $\mathcal{P}^{-1}\mathcal{A}$ and $\mathcal{P}_{R}^{-1}\mathcal{A}$ are displayed in Figure 1. As we see, the eigenvalues of $\mathcal{P}^{-1}\mathcal{A}$ and $\mathcal{P}_{R}^{-1}\mathcal{A}$ are much more clustered than those of $\mathcal{A}$.
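As a rough, self-contained illustration of how such an experiment is wired together (a NumPy/SciPy toy, not the paper's MATLAB/IFISS setup: the matrices are random stand-ins for a discretized Stokes problem and $\alpha = \beta = 1$ are arbitrary), the MGSS preconditioner can be passed to restarted GMRES as a linear operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m = 8, 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
N = rng.standard_normal((m, 2)); C = N @ N.T
Amat = np.block([[A, B.T], [-B, C]])
alpha, beta = 1.0, 1.0

# MGSS preconditioner applied as an operator (a dense solve suffices at this
# toy size; Algorithm 1 would be used instead for large sparse problems)
P = 0.5 * np.block([[alpha * np.eye(n) + A, B.T], [-B, beta * np.eye(m) + C]])
Minv = LinearOperator((n + m, n + m), matvec=lambda r: np.linalg.solve(P, r))

b = Amat @ np.ones(n + m)                # exact solution: vector of all ones
x, info = gmres(Amat, b, M=Minv, restart=5)
```

With the clustered preconditioned spectrum, restarted GMRES converges in a handful of iterations on this instance (`info == 0` signals convergence in SciPy).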
4 Conclusion

We have presented a modification of the generalized shift-splitting method to solve saddle point problems with a symmetric positive definite (1,1)-block and a symmetric positive semidefinite (2,2)-block. The resulting preconditioner and its relaxed version have then been used to precondition the saddle point problem. We have seen that both preconditioners are effective when they are combined with the GMRES($m$) algorithm. Our numerical results show that the proposed preconditioners are more effective than the HSS preconditioner.
The authors are grateful to the anonymous referees for their valuable comments and suggestions which improved the quality of this paper.
References

[1] O. Axelsson and V. A. Barker, Finite Element Solution of Boundary Value Problems, Academic Press, Orlando, FL, 1984.
[2] Z. Z. Bai, Optimal parameters in the HSS-like methods for saddle-point problems, Numer. Linear Algebra Appl. 16 (2009) 447-479.
[3] Z. Z. Bai and G. H. Golub, Accelerated Hermitian and skew-Hermitian splitting methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007) 1-23.
[4] Z. Z. Bai, G. H. Golub and M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603-626.
[5] Z. Z. Bai, G. H. Golub and J. Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1-32.
[6] Z. Z. Bai, B. Parlett and Z. Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1-38.
[7] Z. Z. Bai and Z. Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900-2932.
[8] Z. Z. Bai, J. F. Yin and Y. F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math. 24 (2006) 539-552.
[9] M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 20-41.
[10] M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1-137.
[11] M. Benzi and X. P. Guo, A dimensional split preconditioner for Stokes and linearized Navier-Stokes equations, Appl. Numer. Math. 61 (2011) 66-76.
[12] M. Benzi, M. K. Ng, Q. Niu and Z. Wang, A relaxed dimensional factorization preconditioner for the incompressible Navier-Stokes equations, J. Comput. Phys. 230 (2011) 6185-6202.
[13] J. H. Bramble, J. E. Pasciak and A. T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072-1092.
[14] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
[15] Y. Cao, J. Du and Q. Niu, Shift-splitting preconditioners for saddle point problems, J. Comput. Appl. Math. 272 (2014) 239-250.
[16] Y. Cao, M. Q. Jiang and Y. L. Zheng, A splitting preconditioner for saddle point problems, Numer. Linear Algebra Appl. 18 (2011) 875-895.
[17] C. Chen and C. Ma, A generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett. 43 (2015) 49-55.
[18] F. Ding, P. X. Liu and J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl. Math. Comput. 197 (2008) 41-50.
[19] F. Ding and T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Automat. Control 50 (2005) 1216-1221.
[20] F. Ding and T. Chen, On iterative solutions of general coupled matrix equations, SIAM J. Control Optim. 44 (2006) 2269-2284.
[21] F. Ding and H. Zhang, Gradient-based iterative algorithm for a class of the coupled matrix equations related to control systems, IET Control Theory Appl. 8 (2014) 1588-1595.
[22] H. Elman, Multigrid and Krylov subspace methods for the discrete Stokes equations, Internat. J. Numer. Methods Fluids 22 (1996) 755-770.
[23] H. C. Elman and G. H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645-1661.
[24] H. C. Elman, A. Ramage and D. J. Silvester, IFISS: A Matlab toolbox for modelling incompressible flow, ACM Trans. Math. Software 33 (2007), Article 14.
[25] H. C. Elman, D. J. Silvester and A. J. Wathen, Finite Elements and Fast Iterative Solvers, Oxford University Press, Oxford, 2003.
[26] A. Klawonn, Block-triangular preconditioners for saddle-point problems with a penalty term, SIAM J. Sci. Comput. 19 (1998) 172-184.
[27] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, 2003.
[28] D. Silvester and A. Wathen, Fast iterative solution of stabilized Stokes systems, Part II: Using general block preconditioners, SIAM J. Numer. Anal. 31 (1994) 1352-1367.
[29] S.-L. Wu, T.-Z. Huang and C.-X. Li, Generalized block triangular preconditioner for symmetric saddle point problems, Computing 84 (2009) 183-208.
[30] S.-L. Wu, T.-Z. Huang and C.-X. Li, Modified block preconditioners for the discretized time-harmonic Maxwell equations in mixed form, J. Comput. Appl. Math. 237 (2013) 419-431.
[31] S.-L. Wu, L. Bergamaschi and C.-X. Li, A note on eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices, Numer. Linear Algebra Appl. 21 (2014) 171-174.
[32] S.-L. Wu and C.-X. Li, Eigenvalue estimates of an indefinite block triangular preconditioner for saddle point problem, J. Comput. Appl. Math. 260 (2014) 349-355.