A shift-splitting concept is introduced and, correspondingly, a shift-splitting iteration scheme and a shift-splitting preconditioner are presented for solving large sparse systems of linear equations whose coefficient matrices are ill-conditioned non-Hermitian positive definite matrices. The convergence property of the shift-splitting iteration method and the eigenvalue distribution of the shift-splitting preconditioned matrix are discussed in depth, and the best possible choice of the shift is investigated in detail. Numerical computations show that the shift-splitting preconditioner can induce accurate, robust and effective preconditioned Krylov subspace iteration methods for solving large sparse non-Hermitian positive definite systems of linear equations.
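For orientation only, the following minimal sketch illustrates the stationary iteration induced by the splitting A = (1/2)(alpha*I + A) - (1/2)(alpha*I - A) for a single shift alpha; the dense solve and the function name shift_splitting_iteration are illustrative assumptions, not the paper's implementation, which would use (1/2)(alpha*I + A) as a preconditioner inside a Krylov subspace method.

```python
import numpy as np

def shift_splitting_iteration(A, b, alpha, x0=None, tol=1e-10, max_iter=1000):
    # Splitting A = (1/2)(alpha*I + A) - (1/2)(alpha*I - A) yields the scheme
    #   (alpha*I + A) x_{k+1} = (alpha*I - A) x_k + 2 b.
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    I = np.eye(n)
    M = alpha * I + A          # dense solve used here only for illustration;
    N = alpha * I - A          # in practice M would be factored once (e.g. sparse LU)
    bnorm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(M, N @ x + 2.0 * b)
        if np.linalg.norm(b - A @ x) <= tol * bnorm:
            return x, k
    return x, max_iter
```

In the preconditioning setting, the same matrix (1/2)(alpha*I + A) would instead be supplied as the preconditioner to a Krylov method such as GMRES rather than iterated directly.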
We construct a modified Bernoulli iteration method for solving the quadratic matrix equation AX^2 + BX + C = 0, where A, B and C are square matrices. This method is motivated by the Gauss-Seidel iteration for solving linear systems and the Sherman-Morrison-Woodbury formula for updating matrices. Under suitable conditions, we prove the local linear convergence of the new method. An algorithm is presented to find the solution of the quadratic matrix equation, and some numerical results are given to show the feasibility and the effectiveness of the algorithm. In addition, we also describe and analyze the block version of the modified Bernoulli iteration method.
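As background, the sketch below implements the classical Bernoulli-type fixed-point iteration for AX^2 + BX + C = 0 on which the modified method builds; it is not the modified Bernoulli iteration itself, and the function name bernoulli_iteration and the stopping rule are illustrative assumptions.

```python
import numpy as np

def bernoulli_iteration(A, B, C, X0=None, tol=1e-12, max_iter=200):
    # Classical Bernoulli-type scheme for A X^2 + B X + C = 0:
    #   X_{k+1} = -(B + A X_k)^{-1} C.
    # The modified method of the text refines this with Gauss-Seidel-style
    # updates and Sherman-Morrison-Woodbury-based matrix updating.
    n = B.shape[0]
    X = np.zeros((n, n)) if X0 is None else np.asarray(X0, dtype=float).copy()
    for k in range(1, max_iter + 1):
        X_new = -np.linalg.solve(B + A @ X, C)
        if np.linalg.norm(X_new - X, ord='fro') <= tol * max(1.0, np.linalg.norm(X_new, ord='fro')):
            return X_new, k
        X = X_new
    return X, max_iter
```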
The restrictively preconditioned conjugate gradient (RPCG) method is further developed to solve large sparse systems of linear equations with a block two-by-two structure. The basic idea of this new approach is to apply the RPCG method to the normal-residual equation of the block two-by-two linear system and to construct each required approximate matrix by making use of the incomplete orthogonal factorization of the involved matrix blocks. Numerical experiments show that the new method, called the restrictively preconditioned conjugate gradient on normal residual (RPCGNR), is more robust and effective than either the known RPCG method or the standard conjugate gradient on normal residual (CGNR) method when used for solving large sparse saddle-point problems.
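For reference, the sketch below is the plain CGNR baseline mentioned in the comparison, i.e., the conjugate gradient method applied to the normal-residual equation A^T A x = A^T b; the restrictive preconditioner and the incomplete orthogonal factorizations that define RPCGNR are not reproduced here, and the function name cgnr is an illustrative choice.

```python
import numpy as np

def cgnr(A, b, x0=None, tol=1e-8, max_iter=500):
    # Conjugate gradient on the normal-residual equation A^T A x = A^T b (CGNR).
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x                 # residual of the original system
    z = A.T @ r                   # residual of the normal equations
    p = z.copy()
    tol2 = (tol * np.linalg.norm(A.T @ b)) ** 2
    for k in range(1, max_iter + 1):
        w = A @ p
        alpha = (z @ z) / (w @ w)
        x += alpha * p
        r -= alpha * w
        z_new = A.T @ r
        if z_new @ z_new <= tol2:
            return x, k
        beta = (z_new @ z_new) / (z @ z)
        p = z_new + beta * p
        z = z_new
    return x, max_iter
```

RPCGNR follows the same normal-residual framework but replaces the unpreconditioned inner products with a restrictive preconditioner assembled from the block structure of the saddle-point matrix.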