A Generalization of Cramer's Rule

In this paper, we find two formulas for the solutions of the linear equation $Ax = b$, $x \in \mathbb{R}^m$, $b \in \mathbb{R}^n$, $m \ge n$, where $A = (a_{i,j})$ is an $n \times m$ real matrix. This system has been well studied since the 1970s. It is known, and simple to prove, that there is a solution for all $b \in \mathbb{R}^n$ if, and only if, the rows of $A$ are linearly independent, and that the minimum norm solution is given by the Moore-Penrose inverse formula, often denoted by $A^\dagger$; in this case, this solution is given by $x = A^\dagger b = A^*(AA^*)^{-1}b$. Using this formula, Cramer's Rule and Burgstahler's Theorem (Theorem 2), we prove a representation of this solution in terms of determinants.


Introduction
In this paper, we find a formula depending on determinants for the solutions of the following linear equation:
$$Ax = b, \quad x \in \mathbb{R}^m,\ b \in \mathbb{R}^n,\ m \ge n, \qquad (1)$$
where $A = (a_{i,j})$ is an $n \times m$ real matrix, or, componentwise,
$$\sum_{j=1}^{m} a_{i,j}\, x_j = b_i, \quad i = 1, 2, \dots, n. \qquad (2)$$
Now, if we define the vectors $l_i = (a_{i,1}, a_{i,2}, \dots, a_{i,m})$ formed by the rows of $A$, then the system (2) can also be written as follows:
$$\langle l_i, x \rangle = b_i, \quad i = 1, 2, \dots, n, \qquad (3)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathbb{R}^m$. Usually, one can apply the Gauss elimination method to find solutions of this system; this method is a systematic procedure for solving systems like (1), based on the idea of reducing the augmented matrix $[A \mid b]$ to a form simple enough that the system of equations can be solved by inspection. But, to our knowledge, in general there is no formula for the solutions of (1) in terms of determinants if $m \ne n$. When $m = n$ and $\det(A) \ne 0$, the system (1) admits only one solution, given by Cramer's Rule:

If $\det(A) \ne 0$, then the solution of the system (1) is given by the formula
$$x_i = \frac{\det(A_i(b))}{\det(A)}, \quad i = 1, 2, \dots, n,$$
where $A_i(b)$ is the matrix obtained by replacing the entries in the $i$th column of $A$ by the entries of the column vector $b = (b_1, b_2, \dots, b_n)^T$.
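As a quick illustration of Cramer's Rule, here is a direct translation into NumPy (the matrix and right-hand side below are invented for the example):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for a square matrix A with det(A) != 0 via Cramer's Rule."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                  # replace the i-th column of A by b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer_solve(A, b))              # same result as np.linalg.solve(A, b)
```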
Using the Moore-Penrose inverse formula and Cramer's Rule, one can prove the following theorem. But, for the better understanding of the reader, we will include here a direct proof of it.
Theorem 1.3. The system (1) has a solution for every $b \in \mathbb{R}^n$ if, and only if, the rows of $A$ are linearly independent. Moreover, one solution for this equation is given by the following formula:
$$x = A^*(AA^*)^{-1}b, \qquad (9)$$
where $A^*$ is the transpose of $A$ (or the conjugate transpose of $A$ in the complex case). Also, this solution coincides with the Cramer formula when $n = m$. In fact, writing $y = (AA^*)^{-1}b$, Cramer's Rule gives
$$y_j = \frac{\det\big((AA^*)_j(b)\big)}{\det(AA^*)}, \quad j = 1, 2, \dots, n,$$
where $(AA^*)_j(b)$ is the matrix obtained by replacing the entries in the $j$th column of $AA^*$ by the entries of $b$, and then $x = A^* y$.

The main results of this work are the following theorems.

Theorem 1.4. The solutions of (1)-(3) given by (9) can be written explicitly in terms of determinants (formula (10)).

Theorem 1.5. The system (1) has a solution for every $b \in \mathbb{R}^n$ if, and only if, the set of vectors $\{l_1, l_2, \dots, l_n\}$ formed by the rows of the matrix $A$ is linearly independent in $\mathbb{R}^m$. Moreover, a solution for the system (1) is given by the following formula:
$$x = \sum_{i=1}^{n} c_i\, \upsilon_i, \qquad (12)$$
where the set of vectors $\{\upsilon_1, \upsilon_2, \dots, \upsilon_n\}$ is obtained from the rows of $A$ by the Gram-Schmidt process, and the numbers $c_1, c_2, \dots, c_n$ are determined by substituting (12) into (3), which yields a triangular linear system.
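A minimal numerical sketch of formula (9) (the matrix $A$ and vector $b$ are invented for illustration), checking that it solves the system and agrees with the Moore-Penrose inverse:

```python
import numpy as np

def min_norm_solution(A, b):
    """Formula (9): x = A^T (A A^T)^{-1} b, valid when the rows of A are independent."""
    return A.T @ np.linalg.solve(A @ A.T, b)

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])        # 2 x 3 matrix with linearly independent rows
b = np.array([4.0, 2.0])

x = min_norm_solution(A, b)
assert np.allclose(A @ x, b)                     # x solves the system
assert np.allclose(x, np.linalg.pinv(A) @ b)     # and matches A^+ b
```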

Proof of the Main Theorems
In this section we shall prove Theorems 1.3, 1.4 and 1.5. Also, we shall use some ideas from [2] and the following result from [3], p. 55. We will include here a direct proof of Theorem 1.3, where $(AA^*)_i(b)$ denotes the matrix obtained by replacing the entries in the $i$th column of $AA^*$ by the entries of the column vector $b = (b_1, b_2, \dots, b_n)^T$.
Then, the solution of (1) can be written as in (9). Now, we shall see that this solution has minimum norm. In fact, consider any other solution $w$ of (1); then $\|x\| \le \|w\|$, and $\|x\| = \|w\|$ only if $x = w$.

Proof of Theorem 1.5. Suppose the system is solvable for all $b \in \mathbb{R}^n$. Now, assume the existence of real numbers $\alpha_i$, $i = 1, 2, \dots, n$, not all zero, such that $\sum_{i=1}^{n} \alpha_i l_i = 0$. Then every solution $x$ of (1) would satisfy $\sum_{i=1}^{n} \alpha_i b_i = \sum_{i=1}^{n} \alpha_i \langle l_i, x \rangle = 0$, which fails for suitable $b$; this contradiction shows that the rows of $A$ are linearly independent.
Conversely, suppose the rows of $A$ are linearly independent, and let $\{\upsilon_1, \upsilon_2, \dots, \upsilon_n\}$ be the orthogonal vectors in $\mathbb{R}^m$ obtained from them by the Gram-Schmidt process. Then, system (1) will be equivalent to the following system:
$$\langle \upsilon_i, x \rangle = c_i, \quad i = 1, 2, \dots, n,$$
where the numbers $c_i$ are obtained from $b$ and the Gram-Schmidt coefficients. From here, and using the formula (9), we complete the proof of this theorem.

Examples and Particular Cases
In this section we shall consider some particular cases and examples to illustrate the results of this work.
Example 2.1. Consider the particular case of system (1) with $n = 1$, that is, the single equation $\langle l_1, x \rangle = b_1$. In this case $AA^* = \|l_1\|^2$. Then, if we define the column vector $l_1 = (a_{1,1}, a_{1,2}, \dots, a_{1,m})^T$, formula (9) gives the solution
$$x = \frac{b_1}{\|l_1\|^2}\, l_1.$$
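For the case $n = 1$, the formula reduces to a one-liner; a quick check with invented data:

```python
import numpy as np

l1 = np.array([1.0, 2.0, 2.0])     # single row of A (invented example)
b1 = 9.0
x = (b1 / (l1 @ l1)) * l1          # formula (9) for n = 1
assert np.isclose(l1 @ x, b1)      # x solves <l1, x> = b1
```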
Therefore, a solution of the system (19) is given by the foregoing formula.

Example 2.2. Consider the particular case of system (1) with $n = 2$; call it system (21). If we define the column vectors $l_1 = (a_{1,1}, \dots, a_{1,m})^T$ and $l_2 = (a_{2,1}, \dots, a_{2,m})^T$, then from the formula (10) we obtain a solution of the system (21). Now, we shall apply the foregoing formula, or (12), to a concrete system: if the column vectors formed by the rows are orthogonal, the solution of the system (1) is very simple and given by
$$x = \sum_{i=1}^{n} \frac{b_i}{\|l_i\|^2}\, l_i.$$
Now, we shall apply the formula (28), or (12), to find a solution of a concrete system: if we define the column vectors $l_1, l_2, \dots$ formed by its rows, they constitute an orthogonal set in $\mathbb{R}^3$, and the solution of this system follows directly from the formula above.
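The Gram-Schmidt construction behind formula (12) can be sketched as follows (the matrix and right-hand side are invented, and `gram_schmidt_solve` is a hypothetical helper name):

```python
import numpy as np

def gram_schmidt_solve(A, b):
    """Sketch of formula (12): orthogonalize the rows of A, write x = sum_i c_i v_i,
    and determine the c_i from <l_i, x> = b_i (a triangular system)."""
    vs = []
    for l in A.astype(float):
        v = l.copy()
        for w in vs:                   # subtract projections onto earlier v's
            v -= (l @ w) / (w @ w) * w
        vs.append(v)
    V = np.array(vs)
    M = A @ V.T                        # lower triangular: v_j is orthogonal to l_i for j > i
    c = np.linalg.solve(M, b)
    return V.T @ c

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
x = gram_schmidt_solve(A, b)
assert np.allclose(A @ x, b)
```

Since $x$ lies in the row space of $A$, this coincides with the minimum norm solution of formula (9).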

Variational Method to Obtain Solutions
Theorems 1.3, 1.4 and 1.5 give a formula for one solution of the system (1), the one of minimum norm. But this is not the only way to build solutions of this equation. Next, we shall present a variational method to obtain solutions of (1) as minimizers of the quadratic functional $j: \mathbb{R}^n \to \mathbb{R}$ given by
$$j(\xi) = \frac{1}{2}\langle AA^*\xi, \xi \rangle - \langle b, \xi \rangle. \qquad (30)$$
The critical points of $j$ satisfy
$$AA^*\xi = b; \qquad (31)$$
when $AA^*$ is singular there may be infinitely many critical points. Hence, a solution of the system is given by
$$x = A^*\xi_b, \qquad (32)$$
where $\xi_b$ is any critical point of $j$.
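A sketch of this variational method (the matrix, data, step size and iteration count are illustrative assumptions): minimize $j$ by gradient descent and recover a solution of $Ax = b$ via (32):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0]])
b = np.array([3.0, 4.0])

G = A @ A.T                       # the matrix of the quadratic functional j
xi = np.zeros(2)
step = 0.05
for _ in range(2000):
    grad = G @ xi - b             # gradient of j(xi) = 0.5 <G xi, xi> - <b, xi>
    xi -= step * grad             # descend toward a critical point, equation (31)

x = A.T @ xi                      # formula (32)
assert np.allclose(A @ x, b, atol=1e-6)
```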
Recall that the system (1) has a solution for every $b \in \mathbb{R}^n$ if, and only if, the rows of $A$ are linearly independent, and the minimum norm solution is given by the Moore-Penrose inverse formula, which is often denoted by $A^\dagger$; in this case, this solution is given by $x = A^\dagger b = A^*(AA^*)^{-1}b$. From here one can deduce the well-known Cramer's Rule, which says:

Theorem 1.1. (Cramer's Rule; G. Cramer, 1704-1752) If $A$ is an $n \times n$ matrix with $\det(A) \ne 0$, then system (1) has a unique solution.


In addition, this solution has minimum norm, i.e., $\|x\| \le \|w\|$ for every solution $w$ of (1).

Lemma 2.1. Let $W$ and $Z$ be Hilbert spaces, $G: W \to Z$ a bounded linear operator and $G^*: Z \to W$ its adjoint operator. Then the following statement holds:
(i) $\overline{\mathrm{Range}(G)} = Z$ if, and only if, $G^*$ is one-to-one.

Then, from Theorem 1.1 (Cramer's Rule) we obtain that:

It is easy to see that (31) is in fact an optimality condition for the critical points of the quadratic functional $j$ defined above.

Proposition 3.1. Suppose the quadratic functional $j$ has a minimizer $\xi_b \in \mathbb{R}^n$; then $x = A^*\xi_b$ is a solution of (1).

Proof. First, we observe that $j$ has the form $j(\xi) = \frac{1}{2}\langle AA^*\xi, \xi \rangle - \langle b, \xi \rangle$. If $\xi_b$ is a point where $j$ achieves its minimum value, we obtain that $AA^*\xi_b = b$, and hence $A(A^*\xi_b) = b$.

Under the conditions of Theorem 1.3, the solutions given by the formulas (32) and (9) coincide.

Theorem. The system (1) is solvable if, and only if, the quadratic functional $j$ defined by (30) has a minimum for all $b \in \mathbb{R}^n$.

Proof. Suppose (8) is solvable. Then, the matrix $A$, viewed as an operator from $\mathbb{R}^m$ to $\mathbb{R}^n$, is surjective. Consequently, $j$ is coercive and the existence of a minimum is ensured. The other direction of the proof follows as in Proposition 3.1.

Example 3.1. Now, we shall consider an example where Theorems 1.3, 1.4 and 1.5 cannot be applied, but Proposition 3.1 can: a system whose rows are not linearly independent. The critical points of the quadratic functional $j$ given by (30) satisfy equation (31).

An interesting generalization of Cramer's Rule is due to Prof. Dr. Sylvan Burgstahler [1], from the University of Minnesota, Duluth, where he taught for 20 years. This result is given by the following theorem:
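To illustrate the situation of Example 3.1 with invented data: the rows below are linearly dependent, yet a critical point of $j$ (here, a least-squares solution of equation (31)) still produces a solution of the system:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])        # rows are linearly dependent
b = np.array([2.0, 4.0])          # b lies in the range of A, so (1) is solvable

G = A @ A.T                       # singular, so (31) has infinitely many solutions
xi, *_ = np.linalg.lstsq(G, b, rcond=None)
x = A.T @ xi                      # formula (32)
assert np.allclose(A @ x, b)
```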

Proof of Theorem 1.3. The proof is included here just for the better understanding of the reader.
Thus $A^*$ is one-to-one, and hence so is $AA^*$. Since $AA^*$ is an $n \times n$ matrix, applying Theorem 1.1 we obtain that the system (17) has a solution for all $b \in \mathbb{R}^n$.