A Class of Smoothing Modulus-Based Iterative Method for Solving Implicit Complementarity Problems

Abstract

In this paper, a class of smoothing modulus-based iterative methods is presented for solving implicit complementarity problems. The main idea is to transform the implicit complementarity problem into an equivalent implicit fixed-point equation, and then introduce a smoothing function to obtain approximate solutions. The convergence analysis of the algorithms is given, and their efficiency is verified by numerical experiments.

Share and Cite:

Guo, C. , Li, C. and Luo, T. (2022) A Class of Smoothing Modulus-Based Iterative Method for Solving Implicit Complementarity Problems. American Journal of Computational Mathematics, 12, 197-208. doi: 10.4236/ajcm.2022.122011.

1. Introduction

Complementarity problems arise widely in many applications of science and engineering, such as elastic contact, economic transport, boundary problems in fluid dynamics, convex quadratic programming and inverse problems . In this paper, we consider the implicit complementarity problem (ICP) : find vectors $z,w\in {R}^{n}$ such that

$z-m\left(z\right)\ge 0,\quad w=Mz+q\ge 0,$

${\left(z-m\left(z\right)\right)}^{\text{T}}\left(Mz+q\right)=0$.

where $M\in {R}^{n×n}$ is a given matrix, $q\in {R}^{n}$ is a given constant vector, and $m\left(\cdot \right)$ is a mapping from ${R}^{n}$ into itself. If $m\left(\cdot \right)$ is the zero mapping, the ICP reduces to the linear complementarity problem (LCP).

The ICP was introduced into complementarity theory as a mathematical tool in the study of some stochastic optimal control problems; it was first proposed by Bensoussan and Lion in . After decades of research and development, existence theorems for solutions of the implicit complementarity problem have been obtained , and many effective methods for solving it have been proposed, such as the fixed-point method  , the projection iteration method  , the Schwarz method , etc. In addition, Zheng and Qu  established a superlinearly convergent hybrid method for solving implicit complementarity problems; numerical examples show that this method has higher accuracy and faster convergence than some existing methods. Tian et al.  proposed an unconstrained differentiable penalty method for solving implicit complementarity problems. Xia and Li  first constructed a modulus-based matrix splitting iterative method for solving nonlinear complementarity problems. Then Hong and Li  presented a modulus-based matrix splitting iterative method (MMS) for solving implicit complementarity problems and analyzed its convergence. Wang et al.  gave new convergence conditions for MMS when the system matrix is a positive-definite matrix or an ${H}_{+}$ -matrix. On this basis, Yin et al.  proposed a class of accelerated modulus-based matrix splitting iteration methods for solving implicit complementarity problems. Zheng and Vong  proposed a modified modulus-based matrix splitting iterative method for implicit complementarity problems. Wang and Cao  constructed two-step modulus-based matrix splitting iterative methods (TMMS) for solving implicit complementarity problems; numerical experiments show that these iterative methods are more effective than the MMS methods.

Dong and Jiang proposed the modulus iteration method in . Its main idea is to transform the linear complementarity problem into an equivalent fixed-point equation system and then solve it. Foutayeni et al.  constructed a smoothing function to approximate the linear complementarity problem, and proposed a class of modified Newton methods and $m+1$ -step iteration methods to solve it, obtaining an effective smoothing numerical algorithm. In this paper, we extend this approach to implicit complementarity problems. First, we transform the implicit complementarity problem into an equivalent implicit fixed-point equation system. Since this is a non-differentiable system of absolute value equations, we introduce a smoothing function to approximate the original problem. We then construct a class of smoothing modulus-based iteration methods for solving the approximate system of equations. Finally, we analyze the convergence of the new methods and verify their effectiveness by numerical experiments.

The organization of the paper is as follows. In Section 2, we establish the smoothing modulus-based iteration methods for solving the implicit complementarity problems. The convergence of the smoothing modulus-based iteration methods is presented in Section 3, and the numerical results for the new methods are shown and discussed in Section 4. Finally, some conclusions are given in Section 5.

2. The Smoothing Modulus-Based Iterative Method

In this section, we consider the following implicit complementarity problem (ICP): given a matrix $M\in {R}^{n×n}$, a constant vector $q\in {R}^{n}$, and a mapping $m\left(\cdot \right)$ from ${R}^{n}$ into itself, find vectors $z,w\in {R}^{n}$ such that

$\begin{array}{l}z-m\left(z\right)\ge 0,\text{}w=Mz+q\ge 0\\ {\left(z-m\left(z\right)\right)}^{\text{T}}\left(Mz+q\right)=0.\end{array}$ (1)

Let $z-m\left(z\right)=\beta \left(|x|+x\right)$ and $w=\alpha \left(|x|-x\right)$; this transforms the implicit complementarity problem into the fixed-point equation:

$\left(\alpha I+\beta M\right)x=\left(\alpha I-\beta M\right)|x|-M\cdot m\left(z\right)-q,$ (2)

where $\alpha ,\beta$ are two positive constants and $I\in {R}^{n×n}$ is the identity matrix.

Let $g\left(z\right)=z-m\left(z\right)$. Since $g\left(z\right)$ is an invertible function, $z={g}^{-1}\left(\beta \left(|x|+x\right)\right)$, and the implicit fixed-point Equation (2) becomes

$\left(\Omega +A\right)x=\left(\Omega -A\right)|x|-M\cdot m\left({g}^{-1}\left(\beta \left(|x|+x\right)\right)\right)-q,$ (3)

where $\Omega =\alpha I\in {R}^{n×n}$, $A=\beta M\in {R}^{n×n}$.

Lemma 1. (a) If the solution of the implicit complementarity problems (1) is $\left(z,w\right)$, then $x=\frac{1}{2}\left(\frac{z-m\left(z\right)}{\beta }-\frac{w}{\alpha }\right)$ is the solution of (3).

(b) If the vector x satisfies (3), then $z$ and $w$ given by $z-m\left(z\right)=\beta \left(|x|+x\right)$, $w=\alpha \left(|x|-x\right)$ solve (1).

Proof. The proof is similar to that of Theorem 2.1 in .
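Lemma 1 can be illustrated numerically. In the sketch below, the 2×2 matrix, the mapping $m\left(z\right)=0.5z$, and the constructed solution are our own illustrative choices (none of them come from the paper); we verify the recovery formula of part (a) and the fixed-point Equation (2):

```python
import numpy as np

alpha, beta = 1.0, 1.0
M = np.array([[4.0, -1.0], [-1.0, 4.0]])   # illustrative matrix
m = lambda z: 0.5 * z                      # illustrative mapping m(z)

# Build an ICP solution backwards from a chosen x.
x = np.array([1.0, -2.0])
z = 2.0 * beta * (np.abs(x) + x)           # solves z - m(z) = beta(|x| + x)
w = alpha * (np.abs(x) - x)
q = w - M @ z                              # forces w = M z + q

# Lemma 1(a): x is recovered from the ICP solution (z, w).
x_rec = 0.5 * ((z - m(z)) / beta - w / alpha)
print(np.allclose(x_rec, x))               # True

# x satisfies the fixed-point Equation (2).
I = np.eye(2)
lhs = (alpha * I + beta * M) @ x
rhs = (alpha * I - beta * M) @ np.abs(x) - M @ m(z) - q
print(np.allclose(lhs, rhs))               # True
```

Note that complementarity holds automatically in this construction, since $\left(|x|+x\right)$ and $\left(|x|-x\right)$ have disjoint supports.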

For the implicit fixed-point Equation (3), let $F\left(x\right)$ be a vector function:

$F\left(x\right)=\left(\Omega +A\right)x-\left(\Omega -A\right)|x|+M\cdot m\left(z\right)+q,$ (4)

where $\Omega =\alpha I\in {R}^{n×n}$, $A=\beta M\in {R}^{n×n}$.

Because of the term $|x|$, $F\left(x\right)$ is non-differentiable. We therefore introduce a smoothing vector function from ${R}^{n}$ to ${R}^{n}$:

${\left({x}^{2}+{\text{e}}^{-c}\right)}^{\frac{1}{2}}={\left({\left({x}_{1}^{2}+{\text{e}}^{-c}\right)}^{\frac{1}{2}},{\left({x}_{2}^{2}+{\text{e}}^{-c}\right)}^{\frac{1}{2}},\cdots ,{\left({x}_{n}^{2}+{\text{e}}^{-c}\right)}^{\frac{1}{2}}\right)}^{\text{T}},$

where c is a large positive integer.

Substituting it into (4), we obtain the smoothing function:

${F}_{c}\left(x\right)=\left(\Omega +A\right)x-\left(\Omega -A\right){\left({x}^{2}+{\text{e}}^{-c}\right)}^{\frac{1}{2}}+M\cdot m\left(z\right)+q,$

where $\Omega =\alpha I\in {R}^{n×n}$, $A=\beta M\in {R}^{n×n}$.

Lemma 2. When $c\to \infty$, ${F}_{c}\left(x\right)$ uniformly converges to $F\left(x\right)$.

Proof. The proof is similar to that of Corollary 2.1 in . When $c\to \infty$, ${\left({x}^{2}+{\text{e}}^{-c}\right)}^{\frac{1}{2}}$ converges uniformly to $|x|$; therefore, ${F}_{c}\left(x\right)$ converges uniformly to $F\left(x\right)$.
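The bound behind Lemma 2 is explicit: $0\le {\left({x}_{i}^{2}+{\text{e}}^{-c}\right)}^{1/2}-|{x}_{i}|\le {\text{e}}^{-c/2}$ for every component, independently of $x$, which is exactly uniform convergence as $c\to \infty$. A quick numerical check (the evaluation grid is an arbitrary choice):

```python
import numpy as np

c = 30                                  # the smoothing constant used in Section 4
x = np.linspace(-5.0, 5.0, 10001)       # arbitrary evaluation grid
gap = np.sqrt(x**2 + np.exp(-c)) - np.abs(x)
print(gap.min() >= 0.0)                 # True: the smoothing lies above |x|
print(gap.max() <= np.exp(-c / 2))      # True: uniform bound e^{-c/2}
```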

From Lemma 2, we know that if ${x}^{*}$ is the solution of ${F}_{c}\left(x\right)=0$, then ${x}^{*}$ converges to the solution of $F\left(x\right)=0$ as $c\to \infty$. Since ${F}_{c}\left(x\right)=0$ is a smooth nonlinear system of equations, we can use the classical Newton method to solve it. In order to obtain a good initial value, we use the following modulus-based iteration method.

Algorithm 1. (Modulus-Based Iteration Method)

Step 1: Given $\epsilon >0$, ${z}^{0}\in {R}^{n}$, set $k=0$.

Step 2: 1) Calculate the initial vector ${x}^{0}=\frac{1}{2}\left({z}^{k}-{w}^{k}-m\left({z}^{k}\right)\right)$, set $j=0$,

2) Iteratively compute ${x}^{j+1}\in {R}^{n}$ by solving the equation

$\left(\Omega +A\right){x}^{j+1}=\left(\Omega -A\right)|{x}^{j}|-M\cdot m\left({z}^{k}\right)-q$ (5)

3) ${z}^{k+1}=\beta \left(|{x}^{j+1}|+{x}^{j+1}\right)+m\left({z}^{k}\right)$

Step 3: If $\text{RES}\left({z}^{k}\right)=|{\left(M{z}^{k}+q\right)}^{\text{T}}\left({z}^{k}-m\left({z}^{k}\right)\right)|<\epsilon$, then stop.

Otherwise, set $k=k+1$ and return to Step 2.

According to Remark 1 in , ${x}^{k}\to {x}^{\ast }$ as $k\to \infty$. Therefore, there exists some ${k}_{0}$ such that $‖{x}^{{k}_{0}}-{x}^{*}‖<\delta$, where $\delta >0$ is a constant. This means that ${x}^{{k}_{0}}$ is a good approximation vector, so we can construct the following new algorithm.
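A compact sketch of Algorithm 1 follows (in Python, as a stand-in for the MATLAB setting of Section 4). Two details left open above are filled in as assumptions: the choice ${w}^{k}=\mathrm{max}\left(M{z}^{k}+q,0\right)$ and a fixed tolerance for the inner iteration (5). The 2×2 instance with $m\left(z\right)=0.5z$ and solution $z={\left(4,0\right)}^{\text{T}}$ is constructed for illustration and is not from the paper.

```python
import numpy as np

def modulus_iteration(M, q, m, alpha=1.0, beta=1.0, eps=1e-6,
                      inner_tol=1e-10, max_outer=100, max_inner=500):
    """Sketch of Algorithm 1 (modulus-based iteration)."""
    n = len(q)
    I = np.eye(n)
    B = alpha * I + beta * M          # Omega + A
    C = alpha * I - beta * M          # Omega - A
    z = np.zeros(n)
    res = abs((M @ z + q) @ (z - m(z)))
    for _ in range(max_outer):
        w = np.maximum(M @ z + q, 0.0)                # assumed choice of w^k
        x = 0.5 * ((z - m(z)) / beta - w / alpha)     # initial inner vector
        for _ in range(max_inner):                    # iterate equation (5)
            x_new = np.linalg.solve(B, C @ np.abs(x) - M @ m(z) - q)
            done = np.linalg.norm(x_new - x) < inner_tol
            x = x_new
            if done:
                break
        z = beta * (np.abs(x) + x) + m(z)
        res = abs((M @ z + q) @ (z - m(z)))           # RES(z^{k+1})
        if res < eps:
            break
    return z, res

M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-16.0, 8.0])
z, res = modulus_iteration(M, q, lambda v: 0.5 * v)
print(z, res)   # approximately (4, 0), residual below 1e-6
```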

Algorithm 2. (Smoothing Modulus-Based Newton Method)

Step 1: Given $\epsilon >0$, $c>0$, set $k=0$.

Step 2: Get the initial vector ${x}^{0}$ through Algorithm 1.

Step 3: Compute ${F}_{c}\left({x}^{k}\right)$,

${{F}^{\prime }}_{c}\left({x}^{k}\right)=\left(\begin{array}{cccc}\frac{\partial {f}_{1}\left({x}^{k}\right)}{\partial {x}_{1}}& \frac{\partial {f}_{1}\left({x}^{k}\right)}{\partial {x}_{2}}& \cdots & \frac{\partial {f}_{1}\left({x}^{k}\right)}{\partial {x}_{n}}\\ \frac{\partial {f}_{2}\left({x}^{k}\right)}{\partial {x}_{1}}& \frac{\partial {f}_{2}\left({x}^{k}\right)}{\partial {x}_{2}}& \cdots & \frac{\partial {f}_{2}\left({x}^{k}\right)}{\partial {x}_{n}}\\ ⋮& ⋮& \ddots & ⋮\\ \frac{\partial {f}_{n}\left({x}^{k}\right)}{\partial {x}_{1}}& \frac{\partial {f}_{n}\left({x}^{k}\right)}{\partial {x}_{2}}& \cdots & \frac{\partial {f}_{n}\left({x}^{k}\right)}{\partial {x}_{n}}\end{array}\right)$,

$\Delta {x}^{k}=-{\left({{F}^{\prime }}_{c}\left({x}^{k}\right)\right)}^{-1}{F}_{c}\left({x}^{k}\right)$.

Step 4: Compute

${x}^{k+1}={x}^{k}+\Delta {x}^{k}$, ${z}^{k+1}=\beta \left(|{x}^{k+1}|+{x}^{k+1}\right)+m\left({z}^{k}\right)$.

Step 5: If ${‖{x}^{k+1}-{x}^{k}‖}_{2}<\epsilon$, then stop and output

$z=\beta \left(|{x}^{k+1}|+{x}^{k+1}\right)+m\left({z}^{k+1}\right)$,

$w=\alpha \left(|{x}^{k+1}|-{x}^{k+1}\right)$.

Otherwise, let ${x}^{k}={x}^{k+1}$, ${z}^{k}={z}^{k+1}$, $k=k+1$, and return to Step 3.
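Algorithm 2 admits a short sketch: the Jacobian of ${F}_{c}$ is $\left(\Omega +A\right)-\left(\Omega -A\right)D\left(x\right)$ with $D\left(x\right)=\mathrm{diag}\left({x}_{i}/{\left({x}_{i}^{2}+{\text{e}}^{-c}\right)}^{1/2}\right)$. For brevity the sketch starts from ${x}^{0}=0$ instead of running Algorithm 1, and holds $m\left({z}^{k}\right)$ fixed within each Newton step, as the algorithm does; the 2×2 test instance is the same illustrative one as before, not from the paper.

```python
import numpy as np

def smoothing_newton(M, q, m, alpha=1.0, beta=1.0, c=30, eps=1e-6, max_it=100):
    """Sketch of Algorithm 2 (smoothing modulus-based Newton method)."""
    n = len(q)
    I = np.eye(n)
    Om, A = alpha * I, beta * M
    phi = lambda x: np.sqrt(x**2 + np.exp(-c))        # smooth surrogate for |x|
    z = np.zeros(n)
    x = np.zeros(n)                                   # paper: x^0 from Algorithm 1
    for _ in range(max_it):
        F = (Om + A) @ x - (Om - A) @ phi(x) + M @ m(z) + q
        J = (Om + A) - (Om - A) @ np.diag(x / phi(x)) # Jacobian F_c'(x)
        x_new = x - np.linalg.solve(J, F)             # Newton step
        z = beta * (np.abs(x_new) + x_new) + m(z)
        done = np.linalg.norm(x_new - x) < eps
        x = x_new
        if done:
            break
    w = alpha * (np.abs(x) - x)
    return z, w, x

M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-16.0, 8.0])
z, w, x = smoothing_newton(M, q, lambda v: 0.5 * v)
print(z, w)   # approximately (4, 0) and (0, 4)
```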

The classical Newton method needs a good initial vector for fast convergence. We introduce parameters ${\xi }_{1},{\xi }_{2}$ and use the following iterative sequence to improve the classical Newton method.

$\left\{\begin{array}{l}{y}^{k}={x}^{k}±{\xi }_{1}{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({x}^{k}\right),\\ {x}^{k+1}={y}^{k}-{\xi }_{2}{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}^{k}\right).\end{array}$ (6)

We take ${\xi }_{1}={\xi }_{2}=1$, and use the modified Newton method to solve ${F}_{c}\left(x\right)=0$.

Algorithm 3. (Modified Smoothing Modulus-based Newton method)

Step 1: Given $\epsilon >0$, $c>0$, set $k=0$.

Step 2: Get the initial vector ${x}^{0}$ through Algorithm 1.

Step 3: Compute ${F}_{c}\left({x}^{k}\right)$ and ${{F}^{\prime }}_{c}\left({x}^{k}\right)$.

Step 4: Use (6) to compute ${x}^{k+1}$, and

${z}^{k+1}=\beta \left(|{x}^{k+1}|+{x}^{k+1}\right)+m\left({z}^{k}\right)$.

Step 5: If ${‖{x}^{k+1}-{x}^{k}‖}_{2}<\epsilon$, then stop and output

$z=\beta \left(|{x}^{k+1}|+{x}^{k+1}\right)+m\left({z}^{k+1}\right)$,

$w=\alpha \left(|{x}^{k+1}|-{x}^{k+1}\right)$.

Otherwise, let ${x}^{k}={x}^{k+1}$, ${z}^{k}={z}^{k+1}$, $k=k+1$, and return to Step 3.

On the basis of the modified Newton method, we perform $m+1$ correction steps in each iteration, and thus construct a smoothing modulus-based $m+1$ -step iteration method to solve the implicit complementarity problem, with the following iterative sequence:

$\left\{\begin{array}{l}{y}_{1}^{k}={x}^{k}±{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({x}^{k}\right)\\ {y}_{2}^{k}={y}_{1}^{k}-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{1}^{k}\right)\\ \text{ }⋮\\ {y}_{m}^{k}={y}_{m-1}^{k}-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{m-1}^{k}\right)\\ {x}^{k+1}={y}_{m}^{k}-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{m}^{k}\right)\end{array}$ (7)

We use this $m+1$ -step iterative method to solve ${F}_{c}\left(x\right)=0$.

Algorithm 4. (Smoothing Modulus-Based $m+1$ -step Iterative Method)

Step 1: Given $\epsilon >0$, $c>0$, set $k=0$.

Step 2: Get the initial vector ${x}^{0}$ through Algorithm 1.

Step 3: Compute ${F}_{c}\left({x}^{k}\right)$ and ${{F}^{\prime }}_{c}\left({x}^{k}\right)$.

Step 4: Use (7) to compute ${x}^{k+1}$, and

${z}^{k+1}=\beta \left(|{x}^{k+1}|+{x}^{k+1}\right)+m\left({z}^{k}\right)$.

Step 5: If ${‖{x}^{k+1}-{x}^{k}‖}_{2}<\epsilon$, then stop and output

$z=\beta \left(|{x}^{k+1}|+{x}^{k+1}\right)+m\left({z}^{k+1}\right)$,

$w=\alpha \left(|{x}^{k+1}|-{x}^{k+1}\right)$.

Otherwise, let ${x}^{k}={x}^{k+1}$, ${z}^{k}={z}^{k+1}$, $k=k+1$, and return to Step 3.
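Algorithm 4 (and, with $m=1$, Algorithm 3) reuses one Jacobian ${{F}^{\prime }}_{c}\left({x}^{k}\right)$ for $m+1$ corrections per outer step, which is where the saving over plain Newton comes from. The sketch below makes the same assumptions as before (start from ${x}^{0}=0$ rather than Algorithm 1, hold $m\left({z}^{k}\right)$ fixed within a step, illustrative 2×2 instance):

```python
import numpy as np

def m_plus_1_step(M, q, m_map, m=2, alpha=1.0, beta=1.0, c=30,
                  eps=1e-6, max_it=50):
    """Sketch of Algorithm 4: sequence (7) with a frozen Jacobian F_c'(x^k)."""
    n = len(q)
    I = np.eye(n)
    Om, A = alpha * I, beta * M
    phi = lambda x: np.sqrt(x**2 + np.exp(-c))
    Fc = lambda x, z: (Om + A) @ x - (Om - A) @ phi(x) + M @ m_map(z) + q
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(max_it):
        J = (Om + A) - (Om - A) @ np.diag(x / phi(x))   # factored once per step
        y = x.copy()
        for _ in range(m + 1):                          # the m+1 corrections of (7)
            y = y - np.linalg.solve(J, Fc(y, z))
        z = beta * (np.abs(y) + y) + m_map(z)
        done = np.linalg.norm(y - x) < eps
        x = y
        if done:
            break
    return z, alpha * (np.abs(x) - x)

M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-16.0, 8.0])
z, w = m_plus_1_step(M, q, lambda v: 0.5 * v)
print(z, w)   # approximately (4, 0) and (0, 4)
```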

3. Convergence Theorem

In this section, we give the convergence of the above algorithms.

Theorem 1. If ${x}^{\ast }$ is the solution of the system of equations ${F}_{c}\left(x\right)=0$, then the iterative sequence $\left\{{x}^{k}\right\}$ generated by Algorithm 2 is second-order convergent in a neighborhood of ${x}^{\ast }$.

Proof. By the convergence theorem of Newton's method, the iterative sequence generated by Algorithm 2 converges with order two to the solution ${x}^{\ast }$ of ${F}_{c}\left(x\right)=0$ in a neighborhood of ${x}^{\ast }$.

Theorem 2. For the nonlinear system ${F}_{c}\left(x\right)=0$, define the sequence:

$\left\{\begin{array}{l}{y}^{k}={x}^{k}±{\xi }_{1}{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({x}^{k}\right),\\ {x}^{k+1}={y}^{k}-{\xi }_{2}{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}^{k}\right).\end{array}$

If ${x}^{\ast }$ is the solution of the system of equations ${F}_{c}\left(x\right)=0$, then for ${\xi }_{1}=±1$, ${\xi }_{2}=1$, Algorithm 3 converges to ${x}^{\ast }$ with order three.

Proof. The Taylor expansion of ${F}_{c}\left({x}^{k}\right)$ at ${x}^{\ast }$ is:

${F}_{c}\left({x}^{k}\right)={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\left({x}^{k}-{x}^{\ast }\right)+\frac{1}{2}{{F}^{″}}_{c}\left({x}^{\ast }\right){\left({x}^{k}-{x}^{\ast }\right)}^{2}+ο\left({‖{x}^{k}-{x}^{\ast }‖}^{3}\right).$ (8)

For (8), let ${r}^{k}={x}^{k}-{x}^{\ast }$, we have

${F}_{c}\left({x}^{k}\right)={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\left({r}^{k}+\frac{1}{2}{{F}^{″}}_{c}\left({x}^{\ast }\right){\left({r}^{k}\right)}^{2}+ο\left({‖{r}^{k}‖}^{3}\right)\right),$ (9)

${{F}^{\prime }}_{c}\left({x}^{k}\right)=A={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\left[I+{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){r}^{k}+ο\left({\left({r}^{k}\right)}^{2}\right)\right].$ (10)

Then

${A}^{-1}=\left[I-{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){r}^{k}+ο\left({\left({r}^{k}\right)}^{2}\right)\right]{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}.$ (11)

Let ${r}_{y}^{k}={y}^{k}-{x}^{\ast }$. According to (11), we have

${r}_{y}^{k}={x}^{k}-{x}^{\ast }-{\xi }_{1}{A}^{-1}{F}_{c}\left({x}^{k}\right)=\left(1-{\xi }_{1}\right){r}^{k}+\frac{{\xi }_{1}}{2}{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){\left({r}^{k}\right)}^{2}+ο\left({\left({r}^{k}\right)}^{3}\right).$ (12)

$A{r}_{y}^{k}={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\left[I+{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){r}^{k}+ο\left({\left({r}^{k}\right)}^{2}\right)\right]\left(\left(1-{\xi }_{1}\right){r}^{k}+\frac{{\xi }_{1}}{2}{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){\left({r}^{k}\right)}^{2}+ο\left({\left({r}^{k}\right)}^{3}\right)\right).$ (13)

From (8),

${F}_{c}\left({y}^{k}\right)={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\cdot \left({r}_{y}^{k}+\frac{1}{2}{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){\left({r}_{y}^{k}\right)}^{2}+ο\left({\left({r}_{y}^{k}\right)}^{3}\right)\right).$ (14)

On the other hand

${r}^{k+1}={x}^{k+1}-{x}^{\ast }={y}^{k}-{x}^{\ast }-{\xi }_{2}{A}^{-1}{F}_{c}\left({y}^{k}\right).$ (15)

Equivalent to

$A{r}^{k+1}=A{r}_{y}^{k}-{\xi }_{2}{F}_{c}\left({y}^{k}\right).$ (16)

Applying formula (11), (13) and (14), we have

$A{r}^{k+1}={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\cdot \left(I+{{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){r}^{k}+ο\left({\left({r}^{k}\right)}^{2}\right)\right){r}^{k+1},$ (17)

$A{r}_{y}^{k}-{\xi }_{2}{F}_{c}\left({y}^{k}\right)={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\cdot \left(\left(1-{\xi }_{1}\right)\left(1-{\xi }_{2}\right){r}^{k}+\left(1-\frac{{\xi }_{1}}{2}-{\xi }_{2}\frac{1-{\xi }_{1}+{\xi }_{1}^{2}}{2}\right){{F}^{\prime }}_{c}{\left({x}^{\ast }\right)}^{-1}{{F}^{″}}_{c}\left({x}^{\ast }\right){\left({r}^{k}\right)}^{2}+ο\left({\left({r}^{k}\right)}^{3}\right)\right).$ (18)

In order to obtain ${r}^{k+1}=ο\left({\left({r}^{k}\right)}^{3}\right)$, the constants ${\xi }_{1}$ and ${\xi }_{2}$ must satisfy:

$\left\{\begin{array}{l}\left(1-{\xi }_{1}\right)\left(1-{\xi }_{2}\right)=0\\ 1-\frac{{\xi }_{1}}{2}-{\xi }_{2}\left(\frac{1-{\xi }_{1}+{\xi }_{1}{}^{2}}{2}\right)=0\end{array},$ (19)

By solving the above equations, we obtain ${\xi }_{1}=±1,\text{\hspace{0.17em}}{\xi }_{2}=1$. The theorem is proved.
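The system (19) can be checked directly at the stated roots; a few lines confirm that both sign choices ${\xi }_{1}=\pm 1$ with ${\xi }_{2}=1$ satisfy the two conditions:

```python
checks = []
for xi1, xi2 in [(1.0, 1.0), (-1.0, 1.0)]:
    cond1 = (1 - xi1) * (1 - xi2)                        # first equation of (19)
    cond2 = 1 - xi1 / 2 - xi2 * (1 - xi1 + xi1**2) / 2   # second equation of (19)
    checks.append(cond1 == 0.0 and cond2 == 0.0)
print(checks)  # [True, True]
```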

Theorem 3. If ${x}^{\ast }$ is the solution of the system of equations ${F}_{c}\left(x\right)=0$, define the sequence

$\left\{\begin{array}{l}{y}_{1}^{k}={x}^{k}±{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({x}^{k}\right)\\ {y}_{2}^{k}={y}_{1}^{k}-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{1}^{k}\right)\\ \text{ }⋮\\ {y}_{m}^{k}={y}_{m-1}^{k}-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{m-1}^{k}\right)\\ {x}^{k+1}={y}_{m}^{k}-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{m}^{k}\right)\end{array}$ (20)

Then for any positive integer m, the sequence produced by Algorithm 4 converges to ${x}^{\ast }$ with order $m+2$.

Proof. We prove this by mathematical induction. When $m=1$, the sequence is third-order convergent by Theorem 2, so the theorem holds. Assume the result holds for $m-1\ge 1$; then

${y}_{m-1}^{k}-{x}^{\ast }-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{m-1}^{k}\right)=ο\left({\left({r}^{k}\right)}^{m+1}\right)$. (21)

We now prove that it holds for m, namely

${r}^{k+1}={y}_{m}^{k}-{x}^{\ast }-{{F}^{\prime }}_{c}{\left({x}^{k}\right)}^{-1}{F}_{c}\left({y}_{m}^{k}\right)=ο\left({\left({r}^{k}\right)}^{m+2}\right)$. (22)

Combining formula (20) and the induction hypothesis (21), we get

${r}_{m}^{k}={y}_{m}^{k}-{x}^{\ast }=ο\left({\left({r}^{k}\right)}^{m+1}\right)$, (23)

${{F}^{\prime }}_{c}\left({x}^{k}\right){r}^{k+1}={{F}^{\prime }}_{c}\left({x}^{k}\right){r}_{m}^{k}-{F}_{c}\left({y}_{m}^{k}\right)$. (24)

Using the expansions of ${{F}^{\prime }}_{c}\left({x}^{k}\right)$ and ${F}_{c}\left({y}_{m}^{k}\right)$ at ${x}^{\ast }$, we obtain

${{F}^{\prime }}_{c}\left({x}^{\ast }\right)\left[I+ο\left({r}^{k}\right)\right]{r}^{k+1}={{F}^{\prime }}_{c}\left({x}^{\ast }\right)\left[I+ο\left({r}^{k}\right)\right]{r}_{m}^{k}-{{F}^{\prime }}_{c}\left({x}^{\ast }\right){r}_{m}^{k}-ο\left({\left({r}_{m}^{k}\right)}^{2}\right)$. (25)

From (25), we get ${r}^{k+1}=ο\left({\left({r}^{k}\right)}^{m+2}\right)$. The theorem is proved.

4. Numerical Results

In this section, we use numerical examples to examine the effectiveness of the smoothing modulus-based iterative methods in terms of the number of iteration steps (denoted by “IT”), the elapsed CPU time in seconds (denoted by “CPU”), and the norm of the absolute residual vector (denoted by “RES”). RES is defined as:

$\text{RES}=|{\left(M{z}^{k}+q\right)}^{\text{T}}\left({z}^{k}-m\left({z}^{k}\right)\right)|$,

where ${z}^{k}$ is the kth approximate solution to the ICP.

In this paper, all calculations are run on a machine with a 1.8 GHz CPU and 8 GB of memory, and the programming language is MATLAB (2018b). We choose ${z}^{0}={\left(0,0,\cdots ,0,0\right)}^{\text{T}}$, the large positive integer $c=30$, ${\xi }_{1}={\xi }_{2}=1$, $\epsilon ={10}^{-6}$, and $\alpha =\beta =1$.

We compare our smoothing modulus-based iterative methods with the accelerated modulus-based matrix splitting iteration method, whose iteration coefficient is 1.6 . The abbreviations of the methods are listed in Table 1.

Example 4.1. Let p be a positive integer and $n={p}^{2}$. Consider the implicit complementarity problem (1), where $M\in {R}^{n×n}$ is a block tridiagonal matrix and q is a vector:

$M=\left(\begin{array}{ccccc}S& -I& 0& \cdots & 0\\ -I& S& \ddots & \ddots & ⋮\\ 0& \ddots & \ddots & \ddots & 0\\ ⋮& \ddots & \ddots & S& -I\\ 0& \cdots & 0& -I& S\end{array}\right)\in {R}^{n×n}$, $q=\left(\begin{array}{c}-1\\ 1\\ ⋮\\ {\left(-1\right)}^{n-1}\\ {\left(-1\right)}^{n}\end{array}\right)\in {R}^{n}$

where $S=\text{tridiag}\left(-1,4,-1\right)\in {R}^{p×p}$ is a tridiagonal matrix, $I\in {R}^{p×p}$ is the identity matrix, and the point-to-point mapping is $m\left(z\right)={\left(\sqrt{{z}_{1}},\sqrt{{z}_{2}},\cdots ,\sqrt{{z}_{n}}\right)}^{\text{T}}$. The numerical results are listed in Table 2.
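The test matrix of this example can be assembled with Kronecker products; a sketch (the Kronecker construction is our assembly choice, the paper specifies $M$ blockwise):

```python
import numpy as np

def example_4_1(p):
    """Assemble M and q of Example 4.1 via Kronecker products."""
    n = p * p
    S = 4.0 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)  # tridiag(-1, 4, -1)
    off = np.eye(p, k=1) + np.eye(p, k=-1)                  # block off-diagonal pattern
    M = np.kron(np.eye(p), S) - np.kron(off, np.eye(p))     # -I off-diagonal blocks
    q = np.array([(-1.0) ** i for i in range(1, n + 1)])    # (-1, 1, ..., (-1)^n)^T
    return M, q

m_map = lambda z: np.sqrt(z)   # componentwise mapping, defined for z >= 0

M, q = example_4_1(3)
print(M.shape)                 # (9, 9)
```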

Example 4.2. Let p be a positive integer and $n={p}^{2}$. Consider the implicit complementarity problem (1), where $M\in {R}^{n×n}$ is a block tridiagonal matrix and q is a vector:

$M=\left(\begin{array}{ccccc}S& -0.5I& 0& \cdots & 0\\ -1.5I& S& \ddots & \ddots & ⋮\\ 0& \ddots & \ddots & \ddots & 0\\ ⋮& \ddots & \ddots & S& -0.5I\\ 0& \cdots & 0& -1.5I& S\end{array}\right)\in {R}^{n×n}$, $q=\left(\begin{array}{c}-1\\ 1\\ ⋮\\ {\left(-1\right)}^{n-1}\\ {\left(-1\right)}^{n}\end{array}\right)\in {R}^{n}$

where $S=\text{tridiag}\left(-1.5,4,-0.5\right)\in {R}^{p×p}$ is a tridiagonal matrix, $I\in {R}^{p×p}$ is the identity matrix, and the point-to-point mapping is $m\left(z\right)={\left(\mathrm{arctan}\left({z}_{1}\right),\mathrm{arctan}\left({z}_{2}\right),\cdots ,\mathrm{arctan}\left({z}_{n}\right)\right)}^{\text{T}}$. The numerical results are listed in Table 3.

From Table 2 and Table 3, for different problem sizes n, we list the iteration steps, the CPU time and the residual norms for the AMSOR, SMN, MSMN and SM(m + 1) methods. It can be seen that all methods converge quickly. Among these methods, the SMN, MSMN and SM(m + 1) methods require

Table 1. Abbreviations of methods.

Table 2. Numerical results of Example 4.1.

Table 3. Numerical results of Example 4.2.

fewer iteration steps and less computing time than the AMSOR method, and SM(m + 1) performs best.

Example 4.3. Let p be a positive integer and $n={p}^{2}$. Consider the implicit complementarity problem (1), where $M\in {R}^{n×n}$ is a block tridiagonal matrix and q is a vector:

$M=\left(\begin{array}{ccccc}S& -I& 0& \cdots & 0\\ -I& S& \ddots & \ddots & ⋮\\ 0& \ddots & \ddots & \ddots & 0\\ ⋮& \ddots & \ddots & S& -I\\ 0& \cdots & 0& -I& S\end{array}\right)\in {R}^{n×n}$, $q=\left(\begin{array}{c}-1\\ 1\\ ⋮\\ {\left(-1\right)}^{n-1}\\ {\left(-1\right)}^{n}\end{array}\right)\in {R}^{n}$

where $S=\text{tridiag}\left(-1,4,-1\right)\in {R}^{p×p}$ is a tridiagonal matrix, $I\in {R}^{p×p}$ is the identity matrix, and the point-to-point mapping is $m\left(z\right)={\left({z}_{1}^{3},{z}_{2}^{3},\cdots ,{z}_{n}^{3}\right)}^{\text{T}}$. The numerical results are listed in Table 4.

Example 4.4. Let p be a positive integer and $n={p}^{2}$. Consider the implicit complementarity problem (1), where $M\in {R}^{n×n}$ is a block tridiagonal matrix and q is a vector:

Table 4. Numerical results of Example 4.3.

Table 5. Numerical results of Example 4.4.

$M=\left(\begin{array}{ccccc}S& -0.5I& 0& \cdots & 0\\ -1.5I& S& \ddots & \ddots & ⋮\\ 0& \ddots & \ddots & \ddots & 0\\ ⋮& \ddots & \ddots & S& -0.5I\\ 0& \cdots & 0& -1.5I& S\end{array}\right)\in {R}^{n×n}$, $q=\left(\begin{array}{c}-1\\ 1\\ ⋮\\ {\left(-1\right)}^{n-1}\\ {\left(-1\right)}^{n}\end{array}\right)\in {R}^{n}$

where $S=\text{tridiag}\left(-1.5,4,-0.5\right)\in {R}^{p×p}$ is a tridiagonal matrix, $I\in {R}^{p×p}$ is the identity matrix, and the point-to-point mapping is $m\left(z\right)={\left({z}_{1}^{3},{z}_{2}^{3},\cdots ,{z}_{n}^{3}\right)}^{\text{T}}$. The numerical results are listed in Table 5.

From Table 4 and Table 5, for the symmetric and asymmetric problems, we list the iteration steps, the CPU time and the residual norms for the SMN, MSMN and SM(m + 1) methods, respectively. Among the three algorithms, SM(m + 1) requires the fewest iteration steps, costs the least CPU time, and achieves the best error accuracy.

5. Conclusion

In this paper, a class of smoothing modulus-based iteration methods for solving implicit complementarity problems is proposed, the convergence of the algorithms is analyzed, and numerical experiments show the effectiveness of the methods.

Acknowledgements

This work was supported by the Natural Science Foundation of China (11661027), the Guangxi Natural Science Foundation (2020GXNSFAA159143), and the Innovation Project of GUET Graduate Education (2021YCXS114).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Ferris, M.C. and Pang, J.S. (1997) Engineering and Economic Applications of Complementarity Problems. SIAM Review, 39, 669-713. https://doi.org/10.1137/S0036144595285963

[2] Pang, J.S. (1981) The Implicit Complementarity Problem. Academic Press, New York.

[3] Bensoussan, A. and Lion, J.L. (1973) Nouvelle Formulation de Problèmes de Contrôle Impulsionnel et Applications. Comptes Rendus de l’Académie des Sciences, 276, 1189-1192.

[4] Mosco, U. (1980) On Some Nonlinear Quasi Variational Inequalities and Implicit Complementarity Problems in Stochastic Control Theory. In: Cottle, R.W., Gianessi, F. and Lions, J.L., Eds., Variational Inequalities and Complementarity Problems, Wiley, New York, 271-283.

[5] Noor, M.A. (1988) Fixed Point Approach for Complementarity Problems. Journal of Mathematical Analysis and Applications, 133, 437-448. https://doi.org/10.1016/0022-247X(88)90413-1

[6] Ahmad, K., Kazmi, K.R. and Rehman, N. (1997) Fixed-Point Technique for Implicit Complementarity Problem in Hilbert Lattice. Journal of Optimization Theory and Applications, 93, 67-72. https://doi.org/10.1023/A:1022645716916

[7] Pang, J.S. (1982) On the Convergence of a Basic Iterative Method for the Implicit Complementarity Problems. Journal of Optimization Theory and Applications, 37, 149-162. https://doi.org/10.1007/BF00934765

[8] Bai, Z.Z. (1996) On the Monotone Convergence of the Projected Iteration Methods for Linear Complementarity Problems. Numerical Mathematics, 5, 228-233.

[9] Zhan, W.P., Zhou, S.Z. and Zeng, J.P. (2000) Iterative Methods for a Kind of Quasi-Complementarity Problems. Acta Mathematicae Applicatae Sinica, 23, 551-556.

[10] Zheng, H. and Qu, W. (2020) Superlinearly Convergent Methods for Solving a Class of Implicit Complementarity Problems Based on Sign Analysis. Japan Journal of Industrial and Applied Mathematics, 37, 433-447. https://doi.org/10.1007/s13160-020-00405-3

[11] Tian, B.S., Li, D.H. and Yang, X.Q. (2016) An Unconstrained Differentiable Penalty Method for Implicit Complementarity Problems. Optimization Methods and Software, 31, 775-790. https://doi.org/10.1080/10556788.2016.1146266

[12] Xia, Z.C. and Li, C.L. (2015) Modulus-Based Matrix Splitting Iteration Methods for a Class of Nonlinear Complementarity Problem. Applied Mathematics and Computation, 271, 34-42. https://doi.org/10.1016/j.amc.2015.08.108

[13] Hong, J.T. and Li, C.L. (2016) Modulus-Based Matrix Splitting Iterative Methods for a Class of Implicit Complementarity Problems. Numerical Linear Algebra with Applications, 23, 629-641. https://doi.org/10.1002/nla.2044

[14] Wang, A., Cao, Y. and Shi, Q. (2018) Convergence Analysis of Modulus-Based Matrix Splitting Iterative Methods for Implicit Complementarity Problems. Journal of Inequalities and Applications, 2018, Article No. 2. https://doi.org/10.1186/s13660-017-1593-7

[15] Yin, J.F., Ding, J. and Li, R. (2020) Accelerated Modulus-Based Matrix Splitting Iteration Methods for a Class of Implicit Complementarity Problems. Journal of Tongji University: Natural Science, 48, 1478-1486.

[16] Zheng, H. and Vong, S. (2019) A Modified Modulus-Based Matrix Splitting Iteration Method for Solving Implicit Complementarity Problems. Numerical Algorithms, 82, 572-592. https://doi.org/10.1007/s11075-018-0614-z

[17] Cao, Y. and Wang, A. (2019) Two-Step Modulus-Based Matrix Splitting Iteration Methods for Implicit Complementarity Problems. Numerical Algorithms, 82, 1377-1394. https://doi.org/10.1007/s11075-019-00660-7

[18] Dong, J.L. and Jiang, M.Q. (2009) A Modified Modulus Method for Symmetric Positive-Definite Linear Complementarity Problems. Numerical Linear Algebra with Applications, 16, 129-143. https://doi.org/10.1002/nla.609

[19] EL Foutayeni, Y., EL Bouanani, H. and Khaladi, M. (2017) An (m+1)-Step Iterative Method of Convergence Order (m+2) for Linear Complementarity Problems. Journal of Applied Mathematics and Computing, 54, 229-242. https://doi.org/10.1007/s12190-016-1006-y