On the generalized shift-splitting preconditioner for saddle point problems
Abstract. In this paper, a generalized shift-splitting preconditioner is implemented for saddle point problems with a symmetric positive definite (1,1)-block and a symmetric positive semidefinite (2,2)-block. The proposed preconditioner is extracted from a stationary iterative method which is unconditionally convergent. Moreover, a relaxed version of the proposed preconditioner is presented and some properties of the eigenvalue distribution of the corresponding preconditioned matrix are studied. Finally, some numerical experiments on test problems arising from the finite element discretization of the Stokes problem are given to show the effectiveness of the preconditioners.
Keywords: Saddle point problem, preconditioner, shift-splitting, symmetric positive definite.
AMS Subject Classification: 65F10, 65F50, 65N22.
1 Introduction
Consider the saddle point linear system

(1)
\mathcal{A}z \equiv \begin{bmatrix} A & B^T \\ -B & C \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix} \equiv b,

where $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite (SPD), $C \in \mathbb{R}^{m \times m}$ is symmetric positive semidefinite and $B \in \mathbb{R}^{m \times n}$, $m \le n$, is of full rank. Moreover, $x, f \in \mathbb{R}^n$ and $y, g \in \mathbb{R}^m$. We also assume that the matrices $A$, $B$ and $C$ are large and sparse. According to Lemma 1.1 in [9] the matrix $\mathcal{A}$ is nonsingular. Such systems arise in a variety of scientific computing and engineering applications, including constrained optimization, computational fluid dynamics, mixed finite element discretization of the Navier–Stokes equations, etc. (see [1, 10, 14, 25]). Application-based analysis can be seen in [22, 26, 28].
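To make the setting concrete, the following sketch builds a small matrix of the form (1) and checks its nonsingularity; the sizes and random data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical small instance of (1): A is SPD, C is SPSD (rank one here),
# and B has full row rank; the sizes n, m are illustrative.
rng = np.random.default_rng(0)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # SPD (1,1)-block
B = rng.standard_normal((m, n))      # full row rank since m < n
h = rng.standard_normal(m)
C = np.outer(h, h)                   # SPSD (2,2)-block, singular on purpose
Amat = np.block([[A, B.T], [-B, C]])

# The saddle point matrix is nonsingular even though C itself is singular.
print(np.linalg.matrix_rank(Amat) == n + m)
```

Note that $\mathcal{A}$ is nonsingular even when $C$ is singular, in accordance with Lemma 1.1 in [9].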
In the last decade, there has been intensive work on the development of effective iterative methods for solving matrix equations with different structures (see, for example, [4, 18, 19, 20, 21, 27]). Benzi and Golub [9] investigated the convergence and preconditioning properties of the Hermitian and skew-Hermitian splitting (HSS) iterative method [4] when it is used for solving saddle point problems. Bai et al. in [5] established the preconditioned HSS (PHSS) iterative method, which involves a single parameter, and then Bai and Golub in [3] proposed its two-parameter acceleration, called the accelerated Hermitian and skew-Hermitian splitting (AHSS) iterative method; see also [2]. Besides these HSS-type methods, Uzawa-type schemes [6, 7, 13, 16, 23] and preconditioned Krylov subspace methods, such as MINRES and GMRES incorporated with suitable preconditioners, have also been applied to solve saddle point problems (see [29, 30, 31, 32] and the references therein, as well as [11, 12]). The reader is also referred to [10] for a comprehensive survey.
To solve the saddle point problem (1) when $C = 0$, Cao et al., in [15], proposed the shift-splitting preconditioner

\mathcal{P}_{SS} = \frac{1}{2} \begin{bmatrix} \alpha I + A & B^T \\ -B & \alpha I \end{bmatrix}, \qquad \alpha > 0,

which is a skillful generalization of the idea of the shift-splitting preconditioner $\frac{1}{2}(\alpha I + A)$ initially introduced in [8] for solving a non-Hermitian positive definite linear system $Ax = b$, where $\alpha > 0$ and $I$ is the identity matrix. Recently, Chen and Ma in [17] studied the two-parameter generalization of the preconditioner $\mathcal{P}_{SS}$, say

\mathcal{P}_{GSS} = \frac{1}{2} \begin{bmatrix} \alpha I + A & B^T \\ -B & \beta I \end{bmatrix},

for solving the saddle point linear systems (1) with $C = 0$, where $\alpha > 0$ and $\beta > 0$.
In this paper, we propose a modified generalized shift-splitting (MGSS) preconditioner for the saddle point problem (1) with $C \neq 0$. The MGSS preconditioner is based on a splitting of the saddle point matrix which results in an unconditionally convergent stationary iterative method. Moreover, a relaxed version of the MGSS preconditioner is presented and the eigenvalue distribution of the corresponding preconditioned matrix is studied.
2 The generalized shift-splitting preconditioner
Let $\alpha, \beta > 0$. Consider the splitting $\mathcal{A} = \mathcal{M} - \mathcal{N}$, where

(2)
\mathcal{M} = \frac{1}{2} \begin{bmatrix} \alpha I + A & B^T \\ -B & \beta I + C \end{bmatrix}, \qquad \mathcal{N} = \frac{1}{2} \begin{bmatrix} \alpha I - A & -B^T \\ B & \beta I - C \end{bmatrix}.
This splitting leads to the following stationary iterative method (the MGSS iterative scheme)

(3)
\mathcal{M} z^{(k+1)} = \mathcal{N} z^{(k)} + b, \qquad k = 0, 1, 2, \ldots,

for solving the linear system (1), where $z^{(0)}$ is an initial guess. Therefore, the iteration matrix of the MGSS iterative method is given by $\Gamma = \mathcal{M}^{-1} \mathcal{N}$. In the sequel, the convergence of the proposed method is studied. It is well known that the iterative method (3) is convergent for every initial guess if and only if $\rho(\Gamma) < 1$, where $\rho(\Gamma)$ denotes the spectral radius of $\Gamma$ (see [1]). Let $(u^T, v^T)^T \in \mathbb{C}^{n+m}$, with $u \in \mathbb{C}^n$ and $v \in \mathbb{C}^m$, be an eigenvector corresponding to the eigenvalue $\lambda$ of $\Gamma$. Then, we have $\mathcal{N}(u^T, v^T)^T = \lambda \mathcal{M}(u^T, v^T)^T$, or equivalently
(4)
(1 - \lambda) \alpha u = (1 + \lambda) (A u + B^T v),

(5)
(1 - \lambda) \beta v = (1 + \lambda) (-B u + C v).
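As a sanity check, the iteration matrix can be examined numerically; the sketch below (random illustrative data, not the paper's Stokes test problem) forms $\mathcal{M}$ and $\mathcal{N}$ from (2) and confirms that $\rho(\mathcal{M}^{-1}\mathcal{N}) < 1$, in line with the convergence result proved below.

```python
import numpy as np

# Illustrative random instance of (1); sizes and data are assumptions.
rng = np.random.default_rng(1)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                 # SPD
B = rng.standard_normal((m, n))             # full row rank
h = rng.standard_normal(m)
C = np.outer(h, h)                          # SPSD
Amat = np.block([[A, B.T], [-B, C]])

alpha, beta = 0.5, 0.5
Omega = np.diag(np.r_[alpha * np.ones(n), beta * np.ones(m)])
M = 0.5 * (Omega + Amat)                    # the matrix M of (2)
N = 0.5 * (Omega - Amat)                    # the matrix N of (2)
Gamma = np.linalg.solve(M, N)               # iteration matrix M^{-1} N
rho = max(abs(np.linalg.eigvals(Gamma)))
print(rho < 1.0)
```

Repeating the experiment with other positive shifts $\alpha, \beta$ gives the same qualitative outcome, which is what "unconditionally convergent" means here.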
Lemma 1.
Let $\alpha, \beta > 0$. If $\lambda$ is an eigenvalue of the matrix $\Gamma$, then $\lambda \neq \pm 1$.
Proof.
If $\lambda = 1$, then Eqs. (4) and (5) give $A u + B^T v = 0$ and $-B u + C v = 0$, i.e., $\mathcal{A}(u^T, v^T)^T = 0$, which contradicts the nonsingularity of $\mathcal{A}$. If $\lambda = -1$, then Eqs. (4) and (5) give $2 \alpha u = 0$ and $2 \beta v = 0$, so $u = 0$ and $v = 0$, which contradicts $(u^T, v^T)^T$ being an eigenvector. ∎
Theorem 1.
Let $\alpha, \beta > 0$ and let $\lambda$ be an eigenvalue of the matrix $\Gamma$. Then $|\lambda| < 1$, i.e., the MGSS iterative method (3) converges unconditionally.
Proof.
We first show that $u \neq 0$. If $u = 0$, then it follows from Eq. (4) that $(1 + \lambda) B^T v = 0$. Therefore, from Lemma 1 we conclude that $B^T v = 0$ and this yields $v = 0$, since $B^T$ has full column rank. This is a contradiction because $(u^T, v^T)^T$ is an eigenvector of $\Gamma$.
Without loss of generality let $\|u\|_2 = 1$. Multiplying both sides of (4) by $u^*$ yields

(6)
(1 - \lambda) \alpha = (1 + \lambda) (u^* A u + u^* B^T v).
We consider two cases, $v = 0$ and $v \neq 0$. If $v = 0$, then Eq. (6) implies

\lambda = \frac{\alpha - u^* A u}{\alpha + u^* A u},

and hence $|\lambda| < 1$, since $\alpha > 0$ and $u^* A u > 0$.
We now assume that $v \neq 0$. In this case, taking the conjugate transpose of Eq. (5), multiplying by $v$ on the right and dividing by $1 + \bar{\lambda}$ (recall $\lambda \neq -1$ by Lemma 1), we obtain

(7)
u^* B^T v = v^* C v - \frac{1 - \bar{\lambda}}{1 + \bar{\lambda}} \, \beta \|v\|_2^2.
Substituting Eq. (7) in (6) yields

(1 - \lambda) \alpha = (1 + \lambda) \left( u^* A u + v^* C v - \frac{1 - \bar{\lambda}}{1 + \bar{\lambda}} \, \beta \|v\|_2^2 \right).

Letting $p = u^* A u$, $q = v^* C v$, and $r = \beta \|v\|_2^2$, multiplying the latter equation by $1 + \bar{\lambda}$ and taking real parts, it follows that

(8)
(1 - |\lambda|^2)(\alpha + r) = |1 + \lambda|^2 (p + q).

Since $p > 0$ and $q, r \geq 0$, from (8) we see that

1 - |\lambda|^2 = \frac{|1 + \lambda|^2 (p + q)}{\alpha + r} > 0.

Hence, we have $|\lambda| < 1$, which completes the proof. ∎
Remark 1.
Theorem 1 guarantees the convergence of the MGSS method; however, the stationary iterative method (3) is typically too slow to be competitive. Nevertheless, the matrix $\mathcal{M}$ in (2), denoted by $\mathcal{P}_{MGSS}$, can serve as a preconditioner for a Krylov subspace method such as GMRES, or its restarted version GMRES($m$), to solve system (1). At each step of the MGSS iterative method, or when applying the shift-splitting preconditioner within a Krylov subspace method, we need to compute a vector of the form $z = \mathcal{P}_{MGSS}^{-1} r$ for a given $r$, where $z = (z_1^T, z_2^T)^T$ and $r = (r_1^T, r_2^T)^T$ with $z_1, r_1 \in \mathbb{R}^n$ and $z_2, r_2 \in \mathbb{R}^m$. It is not difficult to check that

\begin{bmatrix} \alpha I + A & B^T \\ -B & \beta I + C \end{bmatrix} = \begin{bmatrix} I & B^T (\beta I + C)^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} S & 0 \\ 0 & \beta I + C \end{bmatrix} \begin{bmatrix} I & 0 \\ -(\beta I + C)^{-1} B & I \end{bmatrix},

where $S = \alpha I + A + B^T (\beta I + C)^{-1} B$. Hence,

(9)
\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = 2 \begin{bmatrix} I & 0 \\ (\beta I + C)^{-1} B & I \end{bmatrix} \begin{bmatrix} S^{-1} & 0 \\ 0 & (\beta I + C)^{-1} \end{bmatrix} \begin{bmatrix} I & -B^T (\beta I + C)^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} r_1 \\ r_2 \end{bmatrix}.
By using Eq. (9) we state Algorithm 1 to compute the vector $z = \mathcal{P}_{MGSS}^{-1} r$, where $z = (z_1^T, z_2^T)^T$ and $r = (r_1^T, r_2^T)^T$, as follows.
Algorithm 1.
Computation of $z = \mathcal{P}_{MGSS}^{-1} r$.

Step 1. Solve $(\beta I + C) w = r_2$ for $w$.

Step 2. Compute $t = 2 (r_1 - B^T w)$.

Step 3. Solve $\big( \alpha I + A + B^T (\beta I + C)^{-1} B \big) z_1 = t$ for $z_1$.

Step 4. Solve $(\beta I + C) s = B z_1$ for $s$.

Step 5. Compute $z_2 = 2 w + s$.
Obviously, the matrix $\alpha I + A + B^T (\beta I + C)^{-1} B$ is SPD. In the practical implementation of Algorithm 1, one may use the conjugate gradient (CG) method or a preconditioned CG (PCG) method to solve the system in Step 3; note that each matrix–vector product in the CG iteration requires a solve with $\beta I + C$. It is noted that, since the matrix $\beta I + C$ is SPD and of small size in comparison with the size of $A$, we use the Cholesky factorization of $\beta I + C$ in Steps 1, 3 and 4.
In the sequel, we consider the relaxed MGSS (RMGSS) preconditioner

\mathcal{P}_{RMGSS} = \begin{bmatrix} A & B^T \\ -B & \beta I + C \end{bmatrix}, \qquad \beta > 0,

for the saddle point problem (1). The next theorem discusses the eigenvalue distribution of $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$.
Theorem 2.
The preconditioned matrix $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ has an eigenvalue 1 with multiplicity $n$, and the remaining eigenvalues are $\mu_i / (\beta + \mu_i)$, $i = 1, \ldots, m$, where the $\mu_i$'s are the eigenvalues of the matrix $C + B A^{-1} B^T$.
Proof.
Since $\mathcal{P}_{RMGSS} = \mathcal{A} + \mathrm{diag}(0, \beta I)$, by using Eq. (9) (with $\alpha = 0$ and neglecting the prefactor 2) we obtain

\mathcal{P}_{RMGSS}^{-1} \mathcal{A} = I - \mathcal{P}_{RMGSS}^{-1} \begin{bmatrix} 0 & 0 \\ 0 & \beta I \end{bmatrix} = \begin{bmatrix} I & \star \\ 0 & I - \beta (\beta I + C + B A^{-1} B^T)^{-1} \end{bmatrix},

where $\star$ denotes a block which does not affect the spectrum, and we used the fact that, by (9) with $\alpha = 0$ and the Sherman–Morrison–Woodbury formula, the (2,2)-block of $\mathcal{P}_{RMGSS}^{-1}$ equals $(\beta I + C + B A^{-1} B^T)^{-1}$. Therefore, $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ has an eigenvalue 1 with multiplicity $n$, and the remaining eigenvalues are the eigenvalues of $I - \beta (\beta I + C + B A^{-1} B^T)^{-1}$, which are $1 - \beta / (\beta + \mu_i) = \mu_i / (\beta + \mu_i)$, $i = 1, \ldots, m$. This completes the proof. ∎
Remark 2.
Similar to Theorem 3.2 in [15], it can be shown that the dimension of the Krylov subspace $\mathcal{K}(\mathcal{P}_{RMGSS}^{-1} \mathcal{A}, b)$ is at most $m + 1$. This shows that the GMRES iterative method to solve (1) in conjunction with the preconditioner $\mathcal{P}_{RMGSS}$ terminates in at most $m + 1$ iterations and provides the exact solution of the system (see Proposition 6.2 in [27]). Obviously, the matrix $C + B A^{-1} B^T$ is SPD and as a result its eigenvalues $\mu_i$ are positive. Therefore, from Theorem 2 we see that $0 < \mu_i / (\beta + \mu_i) < 1$, so every eigenvalue of $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ lies in $(0, 1]$. Hence, the eigenvalues of $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ would be well clustered around the point $(1, 0)$ for small values of $\beta$. In this case, the matrix $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ would be well conditioned.
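The spectrum described in Theorem 2 is easy to verify numerically; the sketch below (random illustrative data, not the paper's Stokes matrices) compares the eigenvalues of the RMGSS-preconditioned matrix with the predicted set $\{1\} \cup \{\mu_i / (\beta + \mu_i)\}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 12, 5
G = rng.standard_normal((n, n)); A = G @ G.T + n * np.eye(n)   # SPD
B = rng.standard_normal((m, n))                                # full row rank
H = rng.standard_normal((m, 2)); C = H @ H.T                   # SPSD
beta = 0.1
Amat = np.block([[A, B.T], [-B, C]])
P = np.block([[A, B.T], [-B, beta * np.eye(m) + C]])           # RMGSS preconditioner

eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P, Amat)).real)
mu = np.linalg.eigvalsh(C + B @ np.linalg.solve(A, B.T))       # eigenvalues of C + B A^{-1} B^T
predicted = np.sort(np.r_[np.ones(n), mu / (beta + mu)])
print(np.allclose(eigs, predicted, atol=1e-7))
```

Shrinking `beta` pushes the $m$ non-unit eigenvalues toward 1, which is the clustering effect exploited by the preconditioned Krylov iteration.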
3 Numerical Experiments
In this section, some numerical experiments are given to show the effectiveness of the MGSS and RMGSS preconditioners. All the numerical experiments presented in this section were computed in double precision using MATLAB codes on a laptop with an Intel Core i7 CPU (1.8 GHz) and 6 GB of RAM. We consider the Stokes problem (see [25, page 221])
(10)
-\nabla^2 \mathbf{u} + \nabla p = \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0,

in $\Omega$, with a prescribed exact solution.
We use the interpolant of $\mathbf{u}$ for specifying Dirichlet conditions everywhere on the boundary. The test problems were generated by using the IFISS software package written by Elman et al. [24]. The IFISS package was used to discretize problem (10) using stabilized Q1–P0 finite elements, with the default value of the stabilization parameter. Matrix properties of the test problems for different sizes are given in Table 1.
We use GMRES(5) in conjunction with the preconditioners $\mathcal{P}_{MGSS}$ and $\mathcal{P}_{RMGSS}$. We also compare the results of the MGSS and RMGSS preconditioners with those of the Hermitian and skew-Hermitian splitting (HSS) preconditioner (see [9]). To show the effectiveness of the methods we also give the results of GMRES(5) without preconditioning. We use a null vector as the initial guess, and the iteration is stopped once the relative residual norm drops below a prescribed tolerance. In the implementation of the preconditioners $\mathcal{P}_{MGSS}$ and $\mathcal{P}_{RMGSS}$ in Algorithm 1, we use the Cholesky factorization of $\beta I + C$ and the CG method to solve the system in Step 3. It is noted that the CG iteration is terminated when the residual norm is reduced by a factor of 100 or when the number of iterations exceeds 40. Numerical results are given in Table 2 for different sizes of the problem. In this table, "IT" denotes the number of iterations for convergence and "CPU" stands for the corresponding CPU time (in seconds). For the HSS preconditioner we experimentally computed the optimal value of the involved parameter (see [9]). For the MGSS method we present the numerical results for two choices of the parameters $(\alpha, \beta)$, and for the RMGSS method for a fixed value of $\beta$. As the numerical results show, all the preconditioners are effective. We also observe that the MGSS and RMGSS preconditioners are superior to the HSS preconditioner in terms of both iteration counts and CPU times. For further investigation, the eigenvalue distributions of the matrices $\mathcal{A}$, $\mathcal{P}_{MGSS}^{-1} \mathcal{A}$ and $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ are displayed in Figure 1. As we see, the eigenvalues of $\mathcal{P}_{MGSS}^{-1} \mathcal{A}$ and $\mathcal{P}_{RMGSS}^{-1} \mathcal{A}$ are much more clustered than those of $\mathcal{A}$.
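As an illustration of how such an experiment can be set up outside MATLAB/IFISS, the following sketch applies the MGSS preconditioner (the matrix $\mathcal{M}$ of (2), applied here by a dense solve rather than by Algorithm 1) inside SciPy's restarted GMRES; the random matrices stand in for the Stokes test problem and are purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(4)
n, m = 60, 15
G = rng.standard_normal((n, n)); A = G @ G.T + n * np.eye(n)   # SPD
B = rng.standard_normal((m, n))                                # full row rank
H = rng.standard_normal((m, 3)); C = H @ H.T                   # SPSD
Amat = np.block([[A, B.T], [-B, C]])
b = rng.standard_normal(n + m)

# MGSS preconditioner; in practice the solve would use Algorithm 1
# (Cholesky of beta*I + C plus CG on the Schur complement).
alpha = beta = 0.01
P = 0.5 * np.block([[alpha * np.eye(n) + A, B.T],
                    [-B, beta * np.eye(m) + C]])
prec = LinearOperator((n + m, n + m), matvec=lambda r: np.linalg.solve(P, r))

z, info = gmres(Amat, b, M=prec, restart=5, maxiter=200)
print(info == 0, np.linalg.norm(b - Amat @ z) / np.linalg.norm(b) < 1e-4)
```

Dropping the `M=prec` argument reproduces the unpreconditioned GMRES(5) baseline of the experiments.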
4 Conclusion
We have presented a modification of the generalized shift-splitting method to solve saddle point problems with a symmetric positive definite (1,1)-block and a symmetric positive semidefinite (2,2)-block. The resulting preconditioner and its relaxed version have then been implemented to precondition the saddle point problem. We have seen that both preconditioners are effective when they are combined with the GMRES($m$) algorithm. Our numerical results show that the proposed preconditioners are more effective than the HSS preconditioner.
Grid      n        m        nnz(A)    nnz(B)    nnz(C)

16×16     578      256      3826      1800      768
32×32     2178     1024     16818     7688      3072
64×64     8450     4096     70450     31752     12288
128×128   33282    16384    288306    129032    49152
          GMRES(5)         MGSS             RMGSS                   HSS

Grid      IT     CPU       IT     CPU       β        IT     CPU     α        IT     CPU
16×16     18     0.119     6      0.048     0.001    6      0.048   0.085    12     0.051
                           6      0.048
32×32     33     0.530     6      0.152     0.001    5      0.145   0.050    18     0.199
                           6      0.153
64×64     147    8.066     14     1.305     0.001    7      1.021   0.020    27     2.449
                           7      0.905
128×128   349    76.9      27     10.492    0.001    15     10.016  0.020    41     24.633
                           14     9.284
Acknowledgements
The authors are grateful to the anonymous referees for their valuable comments and suggestions which improved the quality of this paper.
References
 [1] O. Axelsson and V. A. Barker, Finite Element Solution of Boundary Value Problems, Academic Press, Orlando, FL, 1984.
 [2] Z. Z. Bai, Optimal parameters in the HSS-like methods for saddle-point problems, Numer. Linear Algebra Appl. 16 (2009) 447–479.
 [3] Z. Z. Bai and G. H. Golub, Accelerated Hermitian and skew-Hermitian splitting methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007) 1–23.
 [4] Z. Z. Bai, G. H. Golub and M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
 [5] Z. Z. Bai, G. H. Golub and J. Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1–32.
 [6] Z. Z. Bai, B. Parlett and Z. Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1–38.
 [7] Z. Z. Bai and Z. Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900–2932.
 [8] Z. Z. Bai, J. F. Yin and Y. F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math. 24 (2006) 539–552.
 [9] M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 20–41.
 [10] M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
 [11] M. Benzi and X. P. Guo, A dimensional split preconditioner for Stokes and linearized Navier–Stokes equations, Appl. Numer. Math. 61 (2011) 66–76.
 [12] M. Benzi, M. K. Ng, Q. Niu and Z. Wang, A relaxed dimensional factorization preconditioner for the incompressible Navier–Stokes equations, J. Comput. Phys. 230 (2011) 6185–6202.
 [13] J. H. Bramble, J. E. Pasciak and A. T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072–1092.
 [14] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
 [15] Y. Cao, J. Du and Q. Niu, Shift-splitting preconditioners for saddle point problems, J. Comput. Appl. Math. 272 (2014) 239–250.
 [16] Y. Cao, M. Q. Jiang and Y. L. Zheng, A splitting preconditioner for saddle point problems, Numer. Linear Algebra Appl. 18 (2011) 875–895.
 [17] C. Chen and C. Ma, A generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett. 43 (2015) 49–55.
 [18] F. Ding, P. X. Liu and J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl. Math. Comput. 197 (2008) 41–50.
 [19] F. Ding and T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Automat. Control 50 (2005) 1216–1221.
 [20] F. Ding and T. Chen, On iterative solutions of general coupled matrix equations, SIAM J. Control Optim. 44 (2006) 2269–2284.
 [21] F. Ding and H. Zhang, Gradient-based iterative algorithm for a class of the coupled matrix equations related to control systems, IET Control Theory Appl. 8 (2014) 1588–1595.
 [22] H. Elman, Multigrid and Krylov subspace methods for the discrete Stokes equations, Internat. J. Numer. Methods Fluids 22 (1996) 755–770.
 [23] H. C. Elman and G. H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645–1661.
 [24] H. C. Elman, A. Ramage and D. J. Silvester, IFISS: A Matlab toolbox for modelling incompressible flow, ACM Trans. Math. Software 33 (2007), Article 14.
 [25] H. C. Elman, D. J. Silvester and A. J. Wathen, Finite Elements and Fast Iterative Solvers, Oxford University Press, Oxford, 2003.
 [26] A. Klawonn, Block-triangular preconditioners for saddle-point problems with a penalty term, SIAM J. Sci. Comput. 19 (1998) 172–184.
 [27] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, 2003.
 [28] D. Silvester and A. Wathen, Fast iterative solution of stabilized Stokes systems, Part II: Using general block preconditioners, SIAM J. Numer. Anal. 31 (1994) 1352–1367.
 [29] S. L. Wu, T. Z. Huang and C. X. Li, Generalized block triangular preconditioner for symmetric saddle point problems, Computing 84 (2009) 183–208.
 [30] S. L. Wu, T. Z. Huang and C. X. Li, Modified block preconditioners for the discretized time-harmonic Maxwell equations in mixed form, J. Comput. Appl. Math. 237 (2013) 419–431.
 [31] S. L. Wu, L. Bergamaschi and C. X. Li, A note on eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices, Numer. Linear Algebra Appl. 21 (2014) 171–174.
 [32] S. L. Wu and C. X. Li, Eigenvalue estimates of an indefinite block triangular preconditioner for saddle point problems, J. Comput. Appl. Math. 260 (2014) 349–355.