A Smoothing Neural Network Algorithm for Absolute Value Equations

In this paper, we give a smoothing neural network algorithm for absolute value equations (AVE). By using a smoothing function, we reformulate the AVE as a differentiable unconstrained optimization problem and establish a steepest descent method to solve it. We prove the stability of the neural network and show that its equilibrium state is a solution of the AVE. Numerical tests show the efficiency of the proposed algorithm.


Introduction
Consider the following absolute value equation (AVE) [1]-[3]:

$$Ax - |x| = b, \qquad (1)$$

where $A \in R^{n \times n}$, $b \in R^n$, and $|x|$ denotes the componentwise absolute value of $x \in R^n$. It is a subclass of the generalized absolute value equation $Ax + B|x| = b$ proposed by Rohn [4], and it is an NP-hard problem [1].
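For instance, consider the following small illustrative instance (our own toy example, not taken from the cited references) with $n = 2$:

$$A = \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}, \qquad b = \begin{pmatrix} 2 \\ 2 \end{pmatrix};$$

then $x^* = (1, 1)^T$ solves (1), since $Ax^* - |x^*| = (3 - 1, 3 - 1)^T = b$.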
The AVE is closely related to several important problems, such as linear programming, quadratic programming, and the bimatrix game. These problems can be transformed into the linear complementarity problem, and the linear complementarity problem can in turn be transformed into an absolute value equation. Owing to its simple, special structure and its practical value, research on the absolute value equation has drawn the attention of many researchers. Mangasarian [5] pointed out the relationship between the knapsack feasibility problem and the AVE. Yamashita and Fukushima [6] studied the AVE in depth and applied their results to location-selection problems, obtaining good results. Numerical methods for the AVE, such as the Newton method and quasi-Newton methods, can be found in [7]-[12].
In this paper, we present a smoothing-function-based neural network method for solving the AVE. By using a smooth approximation of $|x|$, we turn the AVE into a differentiable unconstrained optimization problem. We then obtain an approximate solution of the original problem from this unconstrained optimization problem and the associated neural network model. Compared with the Newton method, the neural network model places fewer demands on computing hardware, and its iterative process runs in real time.

The Smoothing Reformulation of the AVE
The absolute value equation (1) is equivalent to the nonlinear system

$$F(x) := Ax - |x| - b = 0.$$

Since $|x|$ is a nonsmooth function, we construct a smooth function to approximate it.

Definition 1.1 (Smoothing approximation function). Given a function $f: R^n \to R$, a function $\phi: R^n \times (0, +\infty) \to R$ is called a smoothing approximation of $f$ if

$$|\phi(x, \mu) - f(x)| \le \kappa \mu \quad \text{for all } x \in R^n \text{ and } \mu > 0,$$

where $\kappa > 0$ is a constant that does not depend on $x$.

In this paper, we use the aggregate function [13] to give a smooth approximation of the absolute value equation. Every component of the absolute value function can be written as $|x_i| = \max\{x_i, -x_i\}$, so the aggregate smoothing function is defined as follows:

$$\phi(x_i, \mu) = \mu \ln\left(e^{x_i/\mu} + e^{-x_i/\mu}\right), \quad \mu > 0, \qquad (2)$$

which satisfies $0 \le \phi(x_i, \mu) - |x_i| \le \mu \ln 2$, so that $\kappa = \ln 2$ in Definition 1.1. So a smooth approximation of the absolute value $|x|$ is obtained as

$$\Phi(x, \mu) = (\phi(x_1, \mu), \ldots, \phi(x_n, \mu))^T,$$

where $\Phi(x, \mu)$ is the smoothing approximation of $|x|$, and the absolute value equation is transformed into the following smooth equations:

$$Ax - \Phi(x, \mu) - b = 0. \qquad (3)$$

The function

$$f(x) = \frac{1}{2}\|Ax - \Phi(x, \mu) - b\|^2$$

is said to be the energy function of the neural network. Thus, finding an approximate solution of the absolute value equation is transformed into finding the global optimal solution of the problem of minimizing $f(x)$.
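As a concrete illustration, the following Python sketch (ours; the paper's own experiments use MATLAB, and the names `phi` and `energy` are our choices) implements the smoothing function (2) in a numerically stable form and evaluates the energy function:

```python
import numpy as np

def phi(x, mu):
    """Aggregate smoothing of |x|: mu * ln(exp(x/mu) + exp(-x/mu)),
    evaluated as |x| + mu * ln(1 + exp(-2|x|/mu)) to avoid overflow."""
    a = np.abs(x)
    return a + mu * np.log1p(np.exp(-2.0 * a / mu))

def energy(x, A, b, mu):
    """Energy function f(x) = 0.5 * ||Ax - Phi(x, mu) - b||^2."""
    r = A @ x - phi(x, mu) - b
    return 0.5 * float(r @ r)

# The approximation error never exceeds mu * ln(2), independently of x:
x = np.linspace(-3.0, 3.0, 7)
print(np.max(phi(x, 0.01) - np.abs(x)) <= 0.01 * np.log(2))  # True
```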

Neural Network Model for Absolute Value Equation
Consider the following unconstrained optimization problem:

$$\min_{x \in R^n} f(x) = \frac{1}{2}\|Ax - \Phi(x, \mu) - b\|^2. \qquad (4)$$

The gradient can be calculated by the following formula:

$$\nabla f(x) = (A - D(x))^T (Ax - \Phi(x, \mu) - b),$$

where $D(x) = \mathrm{diag}(\phi'(x_1, \mu), \ldots, \phi'(x_n, \mu))$ and

$$\phi'(x_i, \mu) = \frac{e^{x_i/\mu} - e^{-x_i/\mu}}{e^{x_i/\mu} + e^{-x_i/\mu}} = \tanh(x_i/\mu).$$

Now we can give a neural network model for solving the absolute value equation, based on the steepest descent direction for (4):

$$\frac{dx(t)}{dt} = -\tau \nabla f(x(t)), \quad x(t_0) = x_0, \qquad (5)$$

where $\tau > 0$ is a parameter; $\tau > 1$ represents that one can use a larger step size in the simulation, and specific details can be found in [14]-[16]. To simplify our analysis, we let $\tau = 1$ throughout this paper. A block diagram (Figure 1) of the neural network is shown as follows.
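The network (5) can be simulated with any ODE integrator. The sketch below (our own illustration, not a circuit realization; `grad_f` and `neural_network` are hypothetical helper names) integrates the steepest descent flow with $\tau = 1$ using scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x, A, b, mu):
    """Gradient of the energy function: (A - D(x))^T (Ax - Phi(x, mu) - b)."""
    a = np.abs(x)
    Phi = a + mu * np.log1p(np.exp(-2.0 * a / mu))   # smoothed |x|
    D = np.tanh(x / mu)                              # phi'(x_i, mu)
    r = A @ x - Phi - b
    return (A - np.diag(D)).T @ r

def neural_network(A, b, x0, mu=1e-3, t_end=50.0):
    """Integrate dx/dt = -grad f(x) from x0 (model (5) with tau = 1)."""
    sol = solve_ivp(lambda t, x: -grad_f(x, A, b, mu),
                    (0.0, t_end), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]                              # state at the final time

# Toy AVE: for x >= 0, Ax - |x| = 2x here, so the solution is x* = (1, 1).
A = np.array([[3.0, 0.0], [0.0, 3.0]])
b = np.array([2.0, 2.0])
print(neural_network(A, b, np.random.rand(2)))       # approx [1. 1.]
```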

Analysis of Stability and Existence
Next, we recall some material on first-order ordinary differential equations (ODEs) [17]:

$$\frac{dx(t)}{dt} = H(x(t)), \quad x(t_0) = x_0, \qquad (6)$$

where $H: R^n \to R^n$ is a mapping. We also introduce three kinds of stability that will be discussed later.
A point $x^*$ is called an equilibrium point or a steady state of the dynamic system (6) if $H(x^*) = 0$.

Lemma 3.1 [17]. Assume that $H: R^n \to R^n$ is a continuous mapping. Then, for any $t_0 \ge 0$ and $x_0 \in R^n$, there exists a local solution $x(t)$, $t \in [t_0, \theta)$, of (6). If, in addition, $H$ is locally Lipschitz continuous at $x_0$, then the solution is unique.

Definition 3.1 (Lyapunov stability). Let $x(t)$ be a solution of (6). An isolated equilibrium point $x^*$ is Lyapunov stable if for any $\varepsilon > 0$ there exists a $\delta > 0$ such that $\|x(t) - x^*\| < \varepsilon$ for all $t \ge t_0$ whenever $\|x(t_0) - x^*\| < \delta$.

Definition 3.2 (Asymptotic stability). An isolated equilibrium point $x^*$ is said to be asymptotically stable if, in addition to being Lyapunov stable, it has the property that $x(t) \to x^*$ as $t \to \infty$ for every solution $x(t)$ starting sufficiently close to $x^*$.

Definition 3.3 (Exponential stability). An isolated equilibrium point $x^*$ is exponentially stable if there exist $\delta > 0$, $c > 0$ and $\beta > 0$ such that $\|x(t) - x^*\| \le c\, e^{-\beta (t - t_0)} \|x(t_0) - x^*\|$ for all $t \ge t_0$ whenever $\|x(t_0) - x^*\| < \delta$.

Definition 3.4 (Lyapunov function). Let $\Omega \subseteq R^n$ be an open neighborhood of $x^*$. A continuously differentiable function $V: \Omega \to R$ is said to be a Lyapunov function at $x^*$ for system (6) if $V(x^*) = 0$, $V(x) > 0$ for all $x \in \Omega \setminus \{x^*\}$, and $\frac{dV(x(t))}{dt} \le 0$ along every solution of (6) in $\Omega$.

Lemma 3.2. a) An isolated equilibrium point $x^*$ is Lyapunov stable if there exists a Lyapunov function over some neighborhood $\Omega^*$ of $x^*$. b) An isolated equilibrium point $x^*$ is asymptotically stable if there exists a Lyapunov function over some neighborhood $\Omega^*$ of $x^*$ such that $\frac{dV(x(t))}{dt} < 0$ for all $x(t) \in \Omega^* \setminus \{x^*\}$.

Lemma 3.3 [11]. If $\|A^{-1}\| < 1$, then for any $b \in R^n$ the absolute value equation (1) has a unique solution.

Theorem 3.1. Let $x^*$ be a solution of the absolute value equation. Then the energy function $f$ is a Lyapunov function for the neural network (5) over some neighborhood $\Omega^*$ of $x^*$.

Proof. 1) The function $f$ is obtained by our smooth approximation, so $f$ is continuous with respect to $x$; obviously $f$ has continuous partial derivatives with respect to all components of $x$.

2) Since $f(x^*) = 0$ and $f(x) > 0$ for $x \ne x^*$ in a neighborhood of the isolated solution $x^*$, and since along the trajectories of (5)

$$\frac{df(x(t))}{dt} = \nabla f(x(t))^T \frac{dx(t)}{dt} = -\|\nabla f(x(t))\|^2 \le 0,$$

by Definition 3.4 we know that $f$ is a Lyapunov function over some neighborhood $\Omega^*$ of $x^*$. □

Theorem 3.2. Each solution of the absolute value equation is an equilibrium point of the neural network (5). Conversely, if $\|A^{-1}\| < 1$, each equilibrium point of the neural network (5) is a solution of the absolute value equation.

Proof. Assume that $x^*$ is a solution of the absolute value equation. Then $Ax^* - |x^*| - b = 0$, so $f(x^*) = 0$ and $\nabla f(x^*) = 0$; that is, $x^*$ is an equilibrium point of the neural network (5). Conversely, let $x^*$ be an equilibrium point of (5), so that $(A - D(x^*))^T (Ax^* - \Phi(x^*, \mu) - b) = 0$. Since $\|D(x^*)\| \le 1$ and $\|A^{-1}\| < 1$, we have $\|A^{-1} D(x^*)\| < 1$, so the matrix $A - D(x^*) = A(I - A^{-1}D(x^*))$ is nonsingular. Hence $Ax^* - \Phi(x^*, \mu) - b = 0$, which holds if and only if $x^*$ is a solution of the absolute value equation. □

Next, we prove that $x^*$ is not only Lyapunov stable but also asymptotically stable.

Theorem 3.3. Let $x^*$ be an isolated equilibrium point of the neural network (5). Then $x^*$ is Lyapunov stable and asymptotically stable for the neural network.
Proof. Since $x^*$ is an isolated equilibrium point of the neural network, $x^*$ is a solution of the absolute value equation by Theorem 3.2. Therefore, by Theorem 3.1, $f$ is a Lyapunov function over some neighborhood $\Omega^*$ of $x^*$, so by Lemma 3.2 a) the isolated equilibrium $x^*$ is Lyapunov stable. Because $x^*$ is isolated, it is not difficult to verify that $\frac{df(x(t))}{dt} = -\|\nabla f(x(t))\|^2 < 0$ for all $x(t) \in \Omega^* \setminus \{x^*\}$, so by Lemma 3.2 b) $x^*$ is asymptotically stable. □

Numerical Experiment
In this section we present some numerical tests of the neural network algorithm. Since the linear complementarity problem can be transformed into an absolute value equation, we use linear complementarity problems, through their equivalent absolute value equations, as test cases. For a given matrix $M \in R^{n \times n}$ and vector $q \in R^n$, the linear complementarity problem $LCP(M, q)$ is to find a $z \in R^n$ such that

$$z \ge 0, \quad Mz + q \ge 0, \quad z^T(Mz + q) = 0.$$

From Theorem 2 in [11], if 1 is not an eigenvalue of the matrix $M$, then $LCP(M, q)$ is equivalent to the following absolute value equation:

$$Ax - |x| = b, \quad A = (M + I)(M - I)^{-1}, \quad b = \big((M + I)(M - I)^{-1} - I\big)q,$$

where $x = (M - I)z + q$, so that $z = (M - I)^{-1}(x - q)$ solves $LCP(M, q)$ whenever $x$ solves the absolute value equation.
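This conversion is straightforward to implement. The helpers below (our own, following the formulas above; the function names are our choices) build $A$ and $b$ from $M$ and $q$ and recover the LCP solution from an AVE solution:

```python
import numpy as np

def lcp_to_ave(M, q):
    """Build the AVE  Ax - |x| = b  equivalent to LCP(M, q).
    Requires that 1 is not an eigenvalue of M, so M - I is invertible."""
    I = np.eye(M.shape[0])
    A = (M + I) @ np.linalg.inv(M - I)   # A = (M + I)(M - I)^{-1}
    return A, (A - I) @ q                # b = (A - I) q

def lcp_solution_from_ave(M, q, x):
    """Recover the LCP solution z from an AVE solution x via x = (M - I) z + q."""
    return np.linalg.solve(M - np.eye(M.shape[0]), x - q)
```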
Example 1 [11]. Consider the linear complementarity problem $LCP(M, q)$ with the data $M$ and $q$ given in [11]. Since 1 is not included in the eigenvalues of $M$, the linear complementarity problem can be transformed into the equivalent absolute value equation $Ax - |x| = b$ with $A = (M + I)(M - I)^{-1}$ and $b = ((M + I)(M - I)^{-1} - I)q$, and a solution of this absolute value equation can be found. Using the neural network model, with the initial point generated by x0 = rand(n,1), the program was performed under the environment of MATLAB 7.11.0. The following two figures (Figure 2 and Figure 3) describe how the approximate solution of Example 1 and the energy function vary with time.

Since $x = (M - I)z + q$, we can then recover the solution $z = (M - I)^{-1}(x - q)$ of the linear complementarity problem $LCP(M, q)$.

Example 2 [11]. Consider the linear complementarity problem $LCP(M, q)$ with the data $M$ and $q$ given in [11]. Through calculation, we find that one eigenvalue of $M$ is 1. By [11], if 1 is an eigenvalue of the matrix $M$, then the $M$ and $q$ of the linear complementarity problem need to be multiplied by a positive constant $\lambda$ that makes 1 not an eigenvalue of $\lambda M$ (the solution of the linear complementarity problem remains invariant). Then we can transform the linear complementarity problem into an absolute value equation by applying Theorems 2 and 3 in [11].

Set $\lambda = 3$; then 1 is not included in the eigenvalues of $\lambda M$, and $LCP(M, q)$ and $LCP(\lambda M, \lambda q)$ have a common optimal solution, so the scaled problem can be transformed into the absolute value equation by applying Theorem 2. Thus we can obtain one solution of the absolute value equation. The following two figures (Figure 4 and Figure 5) describe how the approximate solution of Example 2 and the energy function vary with time. Since $x = (\lambda M - I)z + \lambda q$ for the scaled data, we can then capture the solution of the linear complementarity problem $LCP(M, q)$.

Example 3 [11]. Consider the linear complementarity problem $LCP(M, q)$ with the data $M$ and $q$ given in [11]. Through calculation, we find that one eigenvalue of $M$ is again 1. The same as Example 2, set $\lambda = 3$; then the scaled problem is transformed into the absolute value equation $Ax - |x| = b$, and we can get one solution of the absolute value equation. The following two figures (Figure 6 and Figure 7) describe how the approximate solution of Example 3 and the energy function vary with time. Since $x = (\lambda M - I)z + \lambda q$, we can then capture the solution of the linear complementarity problem $LCP(M, q)$; the solution is $z^* = (2.5, 2.5, 0, 2.5)^T$.
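Putting the pieces together, the following self-contained sketch (an end-to-end illustration on a small LCP of our own construction, not the data of Examples 1-3) converts the LCP to an AVE, integrates the neural network flow (5), and recovers the LCP solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A small LCP(M, q) of our own with known solution z* = (0, 2):
# for z = (0, 2): Mz + q = (3, 0) >= 0, z >= 0, z^T(Mz + q) = 0.
M = np.array([[4.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 +/- sqrt(2), so 1 is not one
q = np.array([1.0, -4.0])

n = len(q)
I = np.eye(n)
A = (M + I) @ np.linalg.inv(M - I)       # AVE data: Ax - |x| = b
b = (A - I) @ q

mu = 1e-3                                # smoothing parameter

def grad_f(x):
    a = np.abs(x)
    Phi = a + mu * np.log1p(np.exp(-2.0 * a / mu))   # smoothed |x|
    r = A @ x - Phi - b
    return (A - np.diag(np.tanh(x / mu))).T @ r

sol = solve_ivp(lambda t, x: -grad_f(x), (0.0, 60.0), np.random.rand(n),
                rtol=1e-8, atol=1e-10)
x_star = sol.y[:, -1]                        # AVE solution, approx (3, -2)
z_star = np.linalg.solve(M - I, x_star - q)  # invert x = (M - I) z + q
print(np.round(z_star, 2))                   # approx [0. 2.]
```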

Conclusion
This paper adopted the aggregate function method to smooth the absolute value equation, and then turned the absolute value equation into a differentiable unconstrained optimization problem. In order to obtain an approximate solution of the original problem, we used the proposed neural network model to solve the unconstrained optimization problem. At the same time, the proposed neural network is based on a different energy function: compared with the traditional energy functions based on NCP functions, it avoids a great deal of matrix computation. Through the transformation between the linear complementarity problem and the absolute value equation, the method can be used to solve the linear complementarity problem, too. Numerical examples show that the algorithm is very effective for solving this kind of absolute value equation, and that the accuracy of the solution can be fully controlled by the smoothing parameter. In view of the fact that the absolute value equation is relatively difficult to solve, the method proposed in this paper offers an effective approach.

Figure 3. Transient behavior of the energy function of Example 1.


Figure 5. Transient behavior of the energy function of Example 2.

Figure 7. Transient behavior of the energy function of Example 3.