Journal of Applied Mathematics and Physics
Vol.06 No.02(2018), Article ID:82704,11 pages
10.4236/jamp.2018.62039

Efficient Iterative Method for Solving the General Restricted Linear Equation

Xiaoji Liu1, Weirong Du1, Yaoming Yu2, Yonghui Qin3*

1Faculty of Science, Guangxi University for Nationalities, Nanning, China

2College of Education, Shanghai Normal University, Shanghai, China

3College of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin, China

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 7, 2018; Accepted: February 25, 2018; Published: February 28, 2018

ABSTRACT

An iterative method is developed for solving the general restricted linear equation. Its convergence, stability, and an error estimate are established. Numerical experiments are presented to demonstrate its efficiency and accuracy.

Keywords:

Linear Equation, Iterative Method, Error Estimate

1. Introduction

Let $\mathbb{C}_r^{m \times n}$ denote the set of all $m \times n$ complex matrices of rank $r$. For a matrix $A$, let $\|A\|_2$, $R(A)$, and $N(A)$ denote its spectral norm, range space, and null space, respectively, and let $\rho(A)$ denote the spectral radius of $A$. For any $A \in \mathbb{C}_r^{m \times n}$, if there exists a matrix $X$ such that $XAX = X$, then $X$ is called a {2}-inverse (or an outer inverse) of $A$ [1].

The restricted linear equation arises in many practical problems [2] [3] [4] . In this paper, we consider the general restricted linear equation

$Ax = b, \quad x \in T$, (1)

where $A \in \mathbb{C}_r^{m \times n}$ and $T$ is a subspace of $\mathbb{C}^n$. By the conclusion given in [2] , (1) has a unique solution if and only if

$b \in AT, \quad T \cap N(A) = \{0\}$. (2)
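Both parts of condition (2) reduce to rank tests and can be checked numerically. The following sketch uses hypothetical data (a rank-2 matrix $A$ and a two-dimensional $T$, not taken from this paper):

```python
import numpy as np

# Hypothetical example: verify condition (2) for a small system.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])     # rank 2

# T = span of the first two standard basis vectors
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # columns of U span T

AT = A @ U                          # columns span the subspace A T
b = AT @ np.array([1.0, 1.0])       # b deliberately chosen inside A T

# b in AT  <=>  rank([AT, b]) == rank(AT)
in_AT = np.linalg.matrix_rank(np.column_stack([AT, b])) == np.linalg.matrix_rank(AT)

# T ∩ N(A) = {0}  <=>  rank(A U) == dim T, i.e. A is injective on T
trivial_int = np.linalg.matrix_rank(AT) == U.shape[1]

print(in_AT and trivial_int)        # both parts of (2) hold
```

When both tests succeed, (1) has a unique solution in $T$.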

In recent years, several numerical methods have been developed to solve problems such as (1). A Cramer rule method is given in [2] , and this method is then extended to compute the unique solution of restricted matrix equations over the quaternion skew field in [5] . An iterative method for finding a solution of (1) is investigated in [6] . In [7] , an iterative method based on subproper and regular splittings is constructed. The PCR algorithm is applied to compute the solution of (1) in parallel in [8] . In [4] , a new iterative method is developed and its convergence analysis is given. A result on condensed Cramer’s rule for the general solution of the restricted quaternion matrix equation is given in [9] . In [10] [11] , the authors develop determinantal representations of the generalized inverse $A_{T,S}^{(2)}$ for the unique solution of (1). The non-stationary Richardson iterative method for solving the general restricted linear equation (1) is also given in [4] . An iterative method is applied to computing the generalized inverse in [13] . In this paper, we develop a high order iterative method to solve problem (1). The proposed method can be started from any initial $x_0 \in T$ and has higher-order accuracy. A necessary and sufficient condition for convergence is also given, which differs from the condition given in [14] . The stability of our scheme is considered as well.

The paper is organized as follows. In Section 2, an iterative method for the general restricted linear equation is developed. The convergence analysis of our method, together with an error estimate, is given in Section 3. In Section 4, some numerical examples are presented to test the effectiveness of our method.

2. Preliminaries and Iterative Scheme

In this section, we develop an iterative method for computing the solution of the general restricted linear Equation (1).

Lemma 1 ( [1] ) Let $A \in \mathbb{C}^{m \times n}$ and let $T$ and $S$ be subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively, with $\dim T = \dim S = t \le r$. Then $A$ has a {2}-inverse (or an outer inverse) $X$ such that $R(X) = T$ and $N(X) = S$ if and only if

$AT \oplus S = \mathbb{C}^m$,

in which case $X$ is unique (denoted by $A_{T,S}^{(2)}$).
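For a concrete illustration of Lemma 1, the outer inverse with prescribed range and null space can be computed from the classical full-rank representation $X = U (V^H A U)^{-1} V^H$, where the columns of $U$ span $T$ and $S = N(V^H)$ [1] . A minimal sketch with hypothetical real data (so $V^H = V^T$):

```python
import numpy as np

# Hypothetical data: build X = A_{T,S}^{(2)} via the full-rank representation
# X = U (V^T A U)^{-1} V^T, with R(U) = T and S = N(V^T).
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])

U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # R(U) = T, dim T = 2

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # S = N(V^T) = span{e3}

X = U @ np.linalg.inv(V.T @ A @ U) @ V.T

print(np.allclose(X @ A @ X, X))    # X is a {2}-inverse of A
```

Here $X = \mathrm{diag}(1/2, 1/3, 0)$, with $R(X) = T$ and $N(X) = S$ by construction; the representation requires $V^T A U$ to be invertible, which is exactly $AT \oplus S = \mathbb{C}^m$.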

Proposition 2 ( [2] ) Let $A \in \mathbb{C}^{m \times n}$ and let $T$ and $S$ be subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively. Assume that condition (2) is satisfied. Then the unique solution of (1) can be expressed as

$x = A_{T,S}^{(2)} b$. (3)

Let $L$ and $M$ be complementary subspaces of $\mathbb{C}^m$, i.e., $L \oplus M = \mathbb{C}^m$, and let the projection $P_L$ be the linear transformation such that $P_L x = x$ for $x \in L$ and $P_L y = 0$ for $y \in M$.

Lemma 3 ( [12] ) Assume that $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{n \times m}$ with $m \le n$. Then the $n$ eigenvalues of $BA$ are the $m$ eigenvalues of $AB$ together with $n - m$ zeros.
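Lemma 3 is easy to verify numerically; the following check uses a hypothetical $2 \times 4$ / $4 \times 2$ pair:

```python
import numpy as np

# Deterministic check of Lemma 3: the four eigenvalues of BA are the two
# eigenvalues of AB together with two zeros.
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0]])          # 2 x 4
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])                    # 4 x 2

eig_AB = np.sort(np.linalg.eigvals(A @ B).real)   # AB is 2 x 2
eig_BA = np.sort(np.linalg.eigvals(B @ A).real)   # BA is 4 x 4

print(np.allclose(eig_AB, [1.0, 2.0]))
print(np.allclose(eig_BA, [0.0, 0.0, 1.0, 2.0]))
```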

In this paper, we construct our iterative scheme as follows:

$\left\{ \begin{array}{l} Z_k = \left[\, tI - C_t^2\, Z_{k-1}A + \cdots + (-1)^{t-1} (Z_{k-1}A)^{t-1} \right] Z_{k-1}, \\ x_k = x_{k-1} + Z_k (b - A x_{k-1}), \end{array} \right.$ (4)

where $k = 1, 2, 3, \ldots$ and $t \ge 2$. Here, we take the initial value $Z_0 = \beta Y$ in our scheme (4), where $\beta$ is a relaxation factor. If $t = 2$, then (4) degenerates into the non-stationary Richardson iterative method given in [4] .
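One reading of scheme (4) can be sketched as follows; the data below (a diagonal $A$ with $Y = A^T$) are hypothetical and chosen only so that the unique restricted solution is known:

```python
import numpy as np
from math import comb

def scheme4(A, b, Y, beta, t=2, iters=6, x0=None):
    """Sketch of scheme (4): with Z_0 = beta * Y,
    Z_k = [ t I - C(t,2) Z_{k-1}A + ... + (-1)^{t-1} (Z_{k-1}A)^{t-1} ] Z_{k-1},
    x_k = x_{k-1} + Z_k (b - A x_{k-1})."""
    Z = beta * Y
    x = np.zeros(Y.shape[0]) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        ZA = Z @ A
        # Sum of (-1)^{i-1} C(t, i) (ZA)^{i-1}, i = 1, ..., t
        M = sum((-1) ** (i - 1) * comb(t, i) * np.linalg.matrix_power(ZA, i - 1)
                for i in range(1, t + 1))
        Z = M @ Z
        x = x + Z @ (b - A @ x)
    return x

# Toy example: the unique restricted solution is [1, 1, 0]
A = np.diag([2.0, 3.0, 0.0])
Y = A.T.copy()                    # R(Y) = T, N(Y) = S
b = np.array([2.0, 3.0, 0.0])
x = scheme4(A, b, Y, beta=0.1, t=3, iters=5)
print(np.allclose(x, [1.0, 1.0, 0.0]))
```

With $\beta = 0.1$ here, $\rho(P_T - Z_0 A) = 0.6 < 1$, and the iterates converge rapidly to the restricted solution for any $t \ge 2$.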

Lemma 4 Let $A \in \mathbb{C}^{m \times n}$ and let $T$ and $S$ be subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively. Assume that $Z_0 = \beta Y$ and $R(Y) \subseteq T$, where $\beta$ is a nonzero constant and $Y \in \mathbb{C}^{n \times m}$. For any initial $x_0 \in T$, the iterative scheme (4) converges to some solution of (1) if and only if

$\rho(P_T - Z_0 A) < 1$,

where $P_T$ is a projection from $\mathbb{C}^n$ onto $T$.

Proof. The proof follows along the lines of [4] . □

3. Convergence Analysis

Now, we consider the convergence analysis of our iterative method (4).

Theorem 5 Let $A \in \mathbb{C}^{m \times n}$ and let $T$ and $S$ be subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively. Assume that $AT \oplus S = \mathbb{C}^m$ and that $Y \in \mathbb{C}^{n \times m}$ satisfies $R(Y) \subseteq T$ and $N(Y) \subseteq S$, where $\dim T = \dim S$. If $b \in AT$, then for the given initial values $Z_0 = \beta Y$ with $\beta \ne 0$ and $x_0 \in T$, the sequence $\{x_k\}$ generated by iteration (4) converges to the unique solution of (1) if and only if $\rho(P_T - Z_0 A) < 1$, where $P_T$ is a projection. In this case, we have

$\lim_{k \to \infty} x_k = (I - P_T + Z_0 A)^{-1} Z_0 b$. (5)

Further, we have

$\|x_k - x\| \le q^{\frac{t(t^k - 1)}{t-1}} \left( \|x_0 - Z_0 b\| + \frac{q}{1-q} \|Z_0 b\| \right)$, (6)

where $q = \|P_T - Z_0 A\|$.

Proof. For any $x \in T \cap N(A)$, we have $Ax = 0$. By $AT \oplus S = \mathbb{C}^m$ and Lemma 1, there exists a matrix $X$ such that $R(X) = T$ and $XAX = X$. Now, assume that $y \in \mathbb{C}^m$ satisfies $x = Xy$; then we have

$x = Xy = XAXy = XAx = 0$, so $T \cap N(A) = \{0\}$.

If $b \in AT$, then (2) is satisfied. Therefore, by ( [4] , Lemma 1.1), the scheme (4) converges to the unique solution of (1).

Since $R(Y) \subseteq T$, we have $P_T Z_0 = Z_0$, and then by (4) we obtain $P_T Z_k = Z_k$. Since $x_0 \in T$, the second equation of (4) gives $P_T x_k = x_k$, and therefore

$x_k = x_{k-1} + Z_k (b - A x_{k-1}) = Z_k b + (P_T - Z_k A) x_{k-1}$. (7)

If $P_T W = W$, then

$(I - Z_{k-1} A) W = (P_T - Z_{k-1} A) W$. (8)

By (4) and (8), we obtain

$Z_k A = \sum_{i=1}^{t} (-1)^{i-1} C_t^i (Z_{k-1} A)^i = \sum_{i=0}^{t-1} (P_T - Z_{k-1} A)^i \, Z_{k-1} A$,

$(P_T - Z_k A) W = (P_T - Z_{k-1} A)^t W = (P_T - Z_0 A)^{t^k} W$. (9)

By induction on k, this leads to

$Z_k A = \sum_{i=0}^{t-1} (P_T - Z_0 A)^{i t^{k-1}} Z_{k-1} A = \sum_{i_1=0}^{t-1} (P_T - Z_0 A)^{i_1 t^{k-1}} \sum_{i_2=0}^{t-1} (P_T - Z_0 A)^{i_2 t^{k-2}} Z_{k-2} A = \cdots = S_k Z_0 A$, (10)

where $S_k := \sum_{i=0}^{t^k - 1} (P_T - Z_0 A)^i$. From $b \in AT$, there exists $w \in T$ such that $b = Aw$, which implies $Z_k b = Z_k A w = S_k Z_0 A w = S_k Z_0 b$. By (9), we have

$x_k = S_k Z_0 b + (P_T - Z_0 A)^{t^k} x_{k-1} = \cdots = \sum_{i=0}^{k} (P_T - Z_0 A)^{\frac{t^{k+1} - t^{k+1-i}}{t-1}} S_{k-i} Z_0 b + (P_T - Z_0 A)^{\frac{t^{k+1} - t}{t-1}} x_0$. (11)

Note that $(P_T - Z_0 A) S_k = S_k (P_T - Z_0 A)$ and $[I - (P_T - Z_0 A)] S_k = I - (P_T - Z_0 A)^{t^k}$. From (11), we obtain

$[I - (P_T - Z_0 A)] x_k = \left[ I - (P_T - Z_0 A)^{\frac{t^{k+1} - t}{t-1}} \right] Z_0 b + [I - (P_T - Z_0 A)] (P_T - Z_0 A)^{\frac{t^{k+1} - t}{t-1}} x_0$. (12)

If $\rho(P_T - Z_0 A) < 1$, then $I - (P_T - Z_0 A)$ is invertible, which implies that $x_k$ converges as $k \to \infty$. For convenience, denote its limit by $x$. Thus, we have $[I - (P_T - Z_0 A)] x = Z_0 b$. Since $T$ is closed and $x_k \in T$, we have $x \in T$ and $P_T x = x$. Thus, $Z_0 (Ax - b) = 0$ and $Ax - b \in N(Z_0) \cap AT = N(Y) \cap AT \subseteq S \cap AT = \{0\}$. Hence $x$ is the unique solution of (1) and $x = [I - (P_T - Z_0 A)]^{-1} Z_0 b$. From (4), it follows that

$x_{k+1} - x = (P_T - Z_{k+1} A)(x_k - x) = \cdots = (P_T - Z_0 A)^{\frac{t(t^{k+1} - 1)}{t-1}} (x_0 - x)$. (13)

From Lemma 4, we have $\rho(P_T - Z_0 A) < 1$ and

$x_k - x = (P_T - Z_0 A)^{\frac{t(t^k - 1)}{t-1}} \left( (x_0 - Z_0 b) + \left( I - (I - P_T + Z_0 A)^{-1} \right) Z_0 b \right)$.

Therefore,

$\|x_k - x\| \le q^{\frac{t(t^k - 1)}{t-1}} \left( \|x_0 - Z_0 b\| + \left\| I - (I - (P_T - Z_0 A))^{-1} \right\| \|Z_0 b\| \right) \le q^{\frac{t(t^k - 1)}{t-1}} \left( \|x_0 - Z_0 b\| + \frac{q}{1-q} \|Z_0 b\| \right)$,

where $q = \|P_T - Z_0 A\|$. □
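The error bound (6) can be checked empirically. The sketch below uses hypothetical diagonal data, takes $t = 2$, and computes $P_T$ as the orthogonal projector onto $R(Y)$ (an assumption made for this sketch; the projector is otherwise unspecified):

```python
import numpy as np

# Empirical check of the bound (6) with hypothetical diagonal data and t = 2,
# so the update is Z_k = (2I - Z_{k-1}A) Z_{k-1}.
A = np.diag([2.0, 3.0, 0.0])
Y = A.T.copy()                       # R(Y) = T, N(Y) = S
b = np.array([2.0, 3.0, 0.0])
x_star = np.array([1.0, 1.0, 0.0])   # unique solution of (1) in T

beta, t = 0.1, 2
Z = beta * Y
P_T = Y @ np.linalg.pinv(Y)          # orthogonal projector onto R(Y)
q = np.linalg.norm(P_T - Z @ A, 2)   # here q = 0.6 < 1

Z0b = Z @ b
c = np.linalg.norm(-Z0b) + q / (1 - q) * np.linalg.norm(Z0b)  # x_0 = 0

x = np.zeros(3)
ok = True
for k in range(1, 6):
    Z = (t * np.eye(3) - Z @ A) @ Z
    x = x + Z @ (b - A @ x)
    bound = q ** (t * (t ** k - 1) / (t - 1)) * c
    ok = ok and np.linalg.norm(x - x_star) <= bound
print(ok)                            # the bound (6) holds at every step
```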

Remark If the condition $N(Y) \subseteq S$ in Theorem 5 is removed and $t = 2$, then the result degenerates into that given in ( [4] , Theorem 3.2). However, the sequence $\{x_k\}$ generated by (4) then does not converge to $A_{T,S}^{(2)} b$, the unique solution of (1) given by Proposition 2. This can be illustrated by the following example:

Let A and b of the general restricted linear Equation (1) be

$A = \begin{bmatrix} 2 & 2.5 & 0.2 & 0.3 & 0 \\ 0 & 1.5 & 0 & 0 & 0 \\ 0 & 0 & 0.2 & 0.2 & 0 \\ 0 & 0 & 0 & 0.25 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \in \mathbb{C}_4^{6 \times 5}, \quad b = \begin{bmatrix} 7 \\ 3 \\ 0.2 \\ 0.2 \\ 0 \\ 0 \end{bmatrix}$. (14)

The matrix Y is

$Y = \begin{bmatrix} 1.2 & 2 & 0.2 & 2 & 1 & 0 \\ 0 & 2 & 5 & 2 & 0 & 0 \\ 0 & 0 & 0.25 & 0.1 & 0 & 0 \\ 0 & 0.1 & 0 & 1.3 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$.

Note that $R(Y) \subseteq T$ but $N(Y) \not\subseteq S$. If we take $\beta = 0.16$, then $\rho(P_T - Z_0 A) < 1$. Here, we choose $t = 2$ in (4), so it can be seen as the method given in [4] . The errors of (4) are presented in Table 1. The numerical results in Table 1 show that the iteration converges, but its limit is not the solution of (1) given by Proposition 2.

Theorem 6 Under the same conditions as in Theorem 5, the iterative scheme (4) is stable for solving (1).

Proof. Let $\tilde{Z}_k$ and $\tilde{x}_k$ be numerical perturbations of $Z_k$ and $x_k$ given in (4), respectively, so that $\tilde{Z}_k = Z_k + \Delta Z_k$ and $\tilde{x}_k = x_k + \Delta x_k$. Here, we formally neglect quadratic terms in $\Delta Z_k$ and $\Delta x_k$. Then we get

By (4), we derive

(15)

From (9) and (4), we have and

Table 1. Error results of (4).

Therefore, we obtain

(16)

By (4), we have. Similarly, we have

(17)

By (17) and (16), we derive

(18)

Thus, by, we have

(19)

If $\rho(P_T - Z_0 A) < 1$, then $\Delta x_k$ remains bounded for any k. It follows that the iterative method (4) is asymptotically stable. □

4. Numerical Examples

In this section, we give an example to test the accuracy of our scheme (4), which is implemented by our main code given in the Appendix, and we make a comparison with the method given in [4] . We also apply our scheme to solve the restricted linear system (1) with different values of t and different initial values.

Example 1 Consider the restricted linear system (1) with a random coefficient matrix of index one of order n, where n = 900, 1000, 2000, and random vectors b. Let Y be a random matrix. Here, we compare the mean CPU time (MCT) and error bounds of our scheme (4) with those given by the method of [4] . The stopping criterion used is given, as in [4] , by

The numerical results given in Table 2 and Figure 1 show that the accuracy of our method is similar to that of [4] and that our method costs less time (MCT) than the method of [4] . From Figure 2, we can see that, to obtain similar accuracy, the MCT of our scheme is comparable to that given in [4] .
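To complement the comparison, here is a small hypothetical benchmark (not the paper's random test matrices) counting how many sweeps of (4) are needed to reach a fixed tolerance for t = 2 versus t = 3 on an invertible 2 × 2 system, where $T = \mathbb{C}^2$ and $P_T = I$:

```python
import numpy as np
from math import comb

def iters_to_tol(A, b, Y, beta, t, tol=1e-8, max_iters=50):
    """Run scheme (4) with Z_0 = beta * Y and count iterations until the
    error against the exact solution drops below tol."""
    x_star = np.linalg.solve(A, b)
    Z = beta * Y
    x = np.zeros(A.shape[1])
    for k in range(1, max_iters + 1):
        ZA = Z @ A
        M = sum((-1) ** (i - 1) * comb(t, i) * np.linalg.matrix_power(ZA, i - 1)
                for i in range(1, t + 1))
        Z = M @ Z
        x = x + Z @ (b - A @ x)
        if np.linalg.norm(x - x_star) < tol:
            return k
    return max_iters

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
Y = A.T.copy()
beta = 0.05                       # beta < 2 / ||A||_2^2, so rho(I - Z_0 A) < 1

k2 = iters_to_tol(A, b, Y, beta, t=2)
k3 = iters_to_tol(A, b, Y, beta, t=3)
print(k3 <= k2)                   # the higher-order variant needs no more sweeps
```

Each sweep of the higher-order variant is more expensive, so fewer sweeps do not automatically mean less total time; the MCT comparison in Table 2 is what settles that trade-off.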

Table 2. The mean CPU time (MCT) and error in Example 1.

Figure 1. Error in example 1.

Example 2 Consider the general restricted linear Equation (1), where A and b are given as in (14). Here, we use the scheme (4) to solve this example.

To verify the accuracy of our method, we present the generalized inverse $A_{T,S}^{(2)}$.

Figure 2. The MCT in Example 1.

Table 3. Error for (4) in Example 2.

To ensure $\rho(P_T - Z_0 A) < 1$, we take the parameter $\beta$ as in Table 3 and Table 4, respectively. We present the errors in the 2-norm in Table 3 and Table 4, respectively.

Table 4. Error for (4) in Example 2.

From the numerical results given in Table 3 and Table 4, we can see that the scheme (4) has high order accuracy and that the results obtained with larger t are better than those obtained with smaller t.

5. Conclusion

A high order iterative method has been derived for solving the general restricted linear equation. The convergence and stability of our method have also been established. Numerical experiments have been presented to demonstrate its efficiency and accuracy.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11061005, 11701119, 11761024), the Natural Science Foundation of Guangxi (No. 2017GXNSFBA198053), the Ministry of Education Science and Technology Key Project (210164), and the open fund of Guangxi Key laboratory of hybrid computation and IC design analysis (HCIC201607).

Cite this paper

Liu, X.J., Du, W.R., Yu, Y.M. and Qin, Y.H. (2018) Efficient Iterative Method for Solving the General Restricted Linear Equation. Journal of Applied Mathematics and Physics, 6, 418-428. https://doi.org/10.4236/jamp.2018.62039

References

1. Ben-Israel, A. and Greville, T.N.E. (2003) Generalized Inverses. Volume 15 of CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 2nd Edition, Springer-Verlag, New York.

2. Chen, Y.L. (1993) A Cramer Rule for Solution of the General Restricted Linear Equation. Linear and Multilinear Algebra, 34, 177-186. https://doi.org/10.1080/03081089308818219

3. Wei, Y.M. and Wu, H.B. (2001) Splitting Methods for Computing the Generalized Inverse and Rectangular Systems. International Journal of Computer Mathematics, 77, 401-424. https://doi.org/10.1080/00207160108805075

4. Srivastava, S. and Gupta, D.K. (2015) An Iterative Method for Solving General Restricted Linear Equations. Applied Mathematics and Computation, 262, 344-353. https://doi.org/10.1016/j.amc.2015.04.047

5. Song, G.J., Wang, Q.-W. and Chang, H.-X. (2011) Cramer Rule for the Unique Solution of Restricted Matrix Equations over the Quaternion Skew Field. Computers & Mathematics with Applications, 61, 1576-1589. https://doi.org/10.1016/j.camwa.2011.01.026

6. Chen, Y.L. (1997) Iterative Methods for Solving Restricted Linear Equations. Applied Mathematics and Computation, 86, 171-184. https://doi.org/10.1016/S0096-3003(96)00180-4

7. Wei, Y.M., Li, X.Z. and Wu, H.B. (2003) Subproper and Regular Splittings for Restricted Rectangular Linear System. Applied Mathematics and Computation, 136, 535-547. https://doi.org/10.1016/S0096-3003(02)00078-4

8. Yu, Y.M. (2008) PCR Algorithm for Parallel Computing the Solution of the General Restricted Linear Equations. Journal of Applied Mathematics and Computing, 27, 125-136. https://doi.org/10.1007/s12190-008-0062-3

9. Song, G.-J. and Dong, C.-Z. (2017) New Results on Condensed Cramer’s Rule for the General Solution to Some Restricted Quaternion Matrix Equations. Journal of Applied Mathematics and Computing, 53, 321-341. https://doi.org/10.1007/s12190-015-0970-y

10. Cai, J. and Chen, G.L. (2007) On Determinantal Representation for the Generalized Inverse and Its Applications. Numerical Linear Algebra with Applications, 14, 169-182. https://doi.org/10.1002/nla.513

11. Liu, X.J., Zhu, G.Y., Zhou, G.P. and Yu, Y.M. (2012) An Analog of the Adjugate Matrix for the Outer Inverse. Mathematical Problems in Engineering, Article ID: 591256.

12. Horn, R.A. and Johnson, C.R. (2013) Matrix Analysis. 2nd Edition, Cambridge University Press, Cambridge.

13. Liu, X.J., Jin, H.W. and Yu, Y.M. (2013) Higher-Order Convergent Iterative Method for Computing the Generalized Inverse and Its Application to Toeplitz Matrices. Linear Algebra and Its Applications, 439, 1635-1650. https://doi.org/10.1016/j.laa.2013.05.005

14. Srivastava, S., Stanimirovic, P.S., Katsikis, V.N. and Gupta, D.K. (2017) A Family of Iterative Methods with Accelerated Convergence for Restricted Linear System of Equations. Mediterranean Journal of Mathematics, 14, 222. https://doi.org/10.1007/s00009-017-1020-9

Appendix

function hocigrlscm()

;;

fprintf('\n')

fprintf('------------------------BEGIN----------------------------\n')

;;

for,

tic;;;;

for

;

; % compute

end

;

;

clear;

itm = toc;

if,; break, end

end % END it