Hybrid Extragradient-Type Methods for Finding a Common Solution of an Equilibrium Problem and a Family of Strict Pseudo-Contraction Mappings

This paper proposes a new hybrid variant of extragradient methods for finding a common solution of an equilibrium problem and a family of strict pseudo-contraction mappings. We present an algorithmic scheme that combines the idea of an extragradient method with a successive iteration method as a hybrid variant. This algorithm is then modified by projecting onto a suitable convex set to obtain a better convergence property. The convergence of these two algorithms is investigated under certain assumptions.


Introduction
Let H be a real Hilbert space endowed with an inner product ⟨·, ·⟩ and the norm ‖·‖ associated with this inner product. Let C be a nonempty closed convex subset of H, and let f be a bifunction from C × C to ℝ such that f(x, x) = 0 for all x ∈ C. An equilibrium problem in the sense of Blum and Oettli [1] is stated as follows:

Find x* ∈ C such that f(x*, y) ≥ 0 for all y ∈ C. (1)

Problem (1) on one hand covers many important problems in optimization as well as in nonlinear analysis, such as (generalized) variational inequalities, nonlinear complementarity problems and nonlinear optimization problems, just to name a few. On the other hand, it is rather convenient for reformulating many practical problems in economics, transportation and engineering (see [1,2] and the references quoted therein).
Alternatively, the problem of finding a common fixed point of a finite family of self-mappings is expressed as follows:

Find x* ∈ C such that x* ∈ ∩_{i=1}^{p} Fix(S_i, C), (2)

where Fix(S_i, C) is the set of fixed points of the mapping S_i (i = 1, ..., p). The problem of finding a fixed point of a mapping or a family of mappings is a classical problem in nonlinear analysis. The theory and solution methods for this problem can be found in many research papers and monographs (see [9]).
Let us denote by Sol(f, C) and ∩_{i=1}^{p} Fix(S_i, C) the solution sets of the equilibrium problem (1) and the fixed-point problem (2), respectively. Our aim in this paper is to address the problem of finding a common solution of problems (1) and (2). Typically, this problem is stated as follows:

Find x* ∈ Sol(f, C) ∩ (∩_{i=1}^{p} Fix(S_i, C)). (3)

Our motivation originates from the following observations. On one hand, problem (3) can be considered as an extension of problem (1), obtained by setting S_i = I for all i = 1, ..., p, where I is the identity mapping. On the other hand, it is significant in many practical problems. Since equilibrium problems have found many applications in economics, transportation and engineering, in some practical problems it may happen that the feasible set arises as the fixed point set of one or many fixed point problems. In this case, the obtained problem can be reformulated in the form of (3). An important special case of problem (3) arises when f is associated with a variational inequality; the problem then reduces to finding a common element of the solution set of a variational inequality and the solution set of a fixed point problem (see [10-12]).
In this paper, we propose a new hybrid iterative method for solving problem (3). This method can be considered as an improvement of the viscosity approximation method in [11] and the iterative methods in [13]. The idea of the algorithm is to combine the extragradient-type methods proposed in [14] with a fixed point iteration method. The algorithm is then modified by projecting onto a suitable convex set to obtain a new variant which possesses a better convergence property.
The rest of the paper is organized as follows. Section 2 recalls some concepts and results on equilibrium problems and fixed point problems that will be used in the sequel. Section 3 presents two algorithms for solving problem (3) and some discussion of their implementation. Section 4 investigates the convergence of the algorithms presented in Section 3 as the main results of our paper.

Preliminaries
Associated with the equilibrium problem (1), the following definition is commonly used as an essential concept (see [3]).

Definition 2.1. A bifunction f: C × C → ℝ is said to be
1) monotone on C if f(x, y) + f(y, x) ≤ 0 for all x, y ∈ C;
2) pseudo-monotone on C if f(x, y) ≥ 0 implies f(y, x) ≤ 0 for all x, y ∈ C;
3) Lipschitz-type continuous on C with constants c₁ > 0 and c₂ > 0 if

f(x, y) + f(y, z) ≥ f(x, z) − c₁‖x − y‖² − c₂‖y − z‖² for all x, y, z ∈ C. (4)

It is clear that every monotone bifunction f is pseudo-monotone. Note that the Lipschitz-type continuity condition (4) was first introduced by Mastroeni in [7].
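The monotonicity of a bifunction built from a monotone operator can be checked directly: for f(x, y) = ⟨F(x), y − x⟩ one has f(x, y) + f(y, x) = −⟨F(x) − F(y), x − y⟩ ≤ 0 whenever F is monotone. A short numeric sanity check, with the operator F(x) = Ax and the matrix A hypothetical (any symmetric positive semidefinite A works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monotone operator F(x) = A x with A symmetric positive
# semidefinite; then f(x, y) = <F(x), y - x> is a monotone bifunction:
# f(x, y) + f(y, x) = -<F(x) - F(y), x - y> <= 0 for all x, y.
M = rng.standard_normal((3, 3))
A = M.T @ M  # symmetric positive semidefinite by construction

def f(x, y):
    return (A @ x) @ (y - x)

monotone = True
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    if f(x, y) + f(y, x) > 1e-10:
        monotone = False
```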
The concept of a strict pseudo-contraction, considered in [15], is defined as follows.

Definition 2.2. Let C be a nonempty closed convex subset of a real Hilbert space H. A mapping S: C → C is said to be an L-strict pseudo-contraction if there exists a constant 0 ≤ L < 1 such that

‖Sx − Sy‖² ≤ ‖x − y‖² + L‖(I − S)x − (I − S)y‖² for all x, y ∈ C,

where I is the identity mapping on H. If L = 0, then S is called nonexpansive on C.
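As a concrete check of this definition (the mapping below is a standard textbook-style example, not taken from the paper): S(x) = −2x on ℝ³ satisfies the inequality of Definition 2.2 with L = 1/3, yet it is not nonexpansive since ‖Sx − Sy‖ = 2‖x − y‖. A numeric verification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: S(x) = -2x is an L-strict pseudo-contraction with
# L = 1/3: ||Sx - Sy||^2 = 4||x - y||^2 and
# ||x - y||^2 + (1/3) ||(I - S)x - (I - S)y||^2 = ||x - y||^2 + 3||x - y||^2.
def S(x):
    return -2.0 * x

L = 1.0 / 3.0
holds = True
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.sum((S(x) - S(y)) ** 2)
    rhs = np.sum((x - y) ** 2) + L * np.sum(((x - S(x)) - (y - S(y))) ** 2)
    if lhs > rhs + 1e-10:
        holds = False
```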
The following proposition lists some useful properties of a strict pseudo-contraction mapping.
Proposition 2.3 [15]. Let C be a nonempty closed convex subset of a real Hilbert space H and let S: C → C be an L-strict pseudo-contraction. Then:
1) S satisfies the Lipschitz condition ‖Sx − Sy‖ ≤ ((1 + L)/(1 − L))‖x − y‖ for all x, y ∈ C;
2) the mapping I − S is demiclosed at zero, i.e., if {x^k} is a sequence in C converging weakly to x and {(I − S)x^k} converges strongly to 0, then (I − S)x = 0;
3) the fixed point set Fix(S, C) is closed and convex;
4) if S_i: C → C is an L_i-strict pseudo-contraction for i = 1, ..., p and λ_1, ..., λ_p are nonnegative numbers with ∑_{i=1}^{p} λ_i = 1, then ∑_{i=1}^{p} λ_i S_i is an L̄-strict pseudo-contraction with L̄ = max{L_i : i = 1, ..., p};
5) if, in addition, each λ_i is positive, then Fix(∑_{i=1}^{p} λ_i S_i, C) = ∩_{i=1}^{p} Fix(S_i, C).

Before presenting our main contribution, let us briefly review the recent literature on methods for solving problem (3). In [11], S. Takahashi and W. Takahashi proposed an iterative scheme, under the name viscosity approximation method, for finding a common element of the set of solutions of (1) and the set of fixed points of a nonexpansive mapping S in a real Hilbert space H. This method generated an iteration sequence {x^k} starting from a given initial point and computed x^{k+1} via

find u^k ∈ C such that f(u^k, y) + (1/r_k)⟨y − u^k, u^k − x^k⟩ ≥ 0 for all y ∈ C,
x^{k+1} = α_k g(x^k) + (1 − α_k) S(u^k),

where g is a contraction of H into itself and the sequences of parameters {r_k} and {α_k} were chosen appropriately. Under a certain choice of {α_k} and {r_k}, the authors showed that the two iterative sequences {x^k} and {u^k} converge strongly to x* = Pr_{Fix(S,C) ∩ Sol(f,C)}(g(x*)), where Pr_C denotes the projection onto C.
Alternatively, the problem of finding a common fixed point of a finite family of mappings has been studied by many researchers. For instance, Marino and Xu in [7] proposed an iterative algorithm for finding a common fixed point of p strict pseudo-contraction mappings S_i (i = 1, ..., p). The method computed a sequence {x^k} starting from a given x^0 by taking x^{k+1} as a convex combination of x^k and the images S_i(x^k), where the sequence of parameters {λ_k} was chosen in a specific way to ensure the convergence of the iterative sequence {x^k}. The authors showed that the sequence {x^k} converges weakly to a common fixed point of the mappings S_i. Recently, Chen et al. in [13] proposed a new iterative scheme for finding a common element of the set of fixed points of a strict pseudo-contraction and the set of solutions of the equilibrium problem (1) in a real Hilbert space H. Given a starting point x^0, this method generates three iterative sequences {x^k}, {y^k} and {z^k} by alternating a regularization step for the equilibrium problem (1) with a Mann-type step for the strict pseudo-contraction.
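The Mann-type fixed-point iterations recalled above can be illustrated numerically. A toy sketch (the mapping S(x) = −2x is a hypothetical example, an L-strict pseudo-contraction with L = 1/3 and unique fixed point 0): the iteration x^{k+1} = λ x^k + (1 − λ) S(x^k) converges when the relaxation parameter λ is kept in (L, 1).

```python
import numpy as np

# Mann iteration for the L-strict pseudo-contraction S(x) = -2x (L = 1/3).
# With a constant relaxation parameter lam in (L, 1) the map contracts:
# x_{k+1} = lam * x_k + (1 - lam) * S(x_k) = (3*lam - 2) * x_k.
def S(x):
    return -2.0 * x

lam = 0.6  # chosen in (1/3, 1); the per-step factor is 3*0.6 - 2 = -0.2
x = np.array([1.0, -2.0, 0.5])
for _ in range(100):
    x = lam * x + (1.0 - lam) * S(x)
# |x| shrinks by a factor 0.2 each step, so x converges to the fixed point 0
```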
Here, the two sequences {α_k} and {r_k} are given as control parameters. Under certain conditions imposed on {α_k} and {r_k}, the authors showed that the sequences {x^k}, {y^k} and {z^k} converge strongly to the same point x*. Solution methods for finding a common element of the set of solutions of (1) and the set of common fixed points in a real Hilbert space have recently been studied in many research papers (see [8,12,16-23]). Throughout those papers, two essential assumptions on the function f have been used: monotonicity and Lipschitz-type continuity.
In this paper, we continue studying problem (3) by proposing a new iterative algorithm for finding a solution of (3). The essential assumptions that will be used in our development include pseudo-monotonicity and Lipschitz-type continuity of the bifunction f and the strict pseudo-contractiveness of S_i (i = 1, ..., p). The algorithm is then modified to obtain a new variant which has a better convergence property.

New Hybrid Extragradient Algorithms
In this section we present two algorithms for finding a solution of problem (3). Before presenting the algorithmic schemes, we recall the following assumptions, which will be used to prove the convergence of the algorithms.
Assumption 3.1. The bifunction f satisfies the following conditions:
1) f is pseudo-monotone and continuous on C;
2) f is Lipschitz-type continuous on C with constants c₁ > 0 and c₂ > 0;
3) for each x ∈ C, the function f(x, ·) is convex and subdifferentiable on C.

The first algorithm is now described as follows.

Algorithm 3.4. Initialization: Choose an initial point x^0 ∈ C and positive sequences {λ_k} and {λ_{k,i}} satisfying the conditions (10). Set k := 0.
Iteration k: Perform the two steps below.
Step 1: Solve the two strongly convex programs

y^k = argmin{λ_k f(x^k, y) + (1/2)‖y − x^k‖² : y ∈ C},
t^k = argmin{λ_k f(y^k, y) + (1/2)‖y − x^k‖² : y ∈ C}.

Step 2: Compute x^{k+1} = λ_{k,0} t^k + ∑_{i=1}^{p} λ_{k,i} S_i(t^k). Set k := k + 1 and go back to Step 1.

The main task of Algorithm 3.4 is to solve the two strongly convex programming problems at Step 1. Since these problems are strongly convex and C is nonempty, they are uniquely solvable. To terminate the algorithm, we can use a condition on the step size, for instance by checking whether ‖y^k − x^k‖ ≤ ε. Step 1 of Algorithm 3.4 is the extragradient-type step for solving the equilibrium problem (1) proposed in [14]. Step 2 is the iteration (7) of the iterative method proposed in [24].
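For intuition about the two strongly convex programs of Step 1 (an illustrative sketch, not the paper's general scheme): when f(x, y) = ⟨F(x), y − x⟩, each argmin reduces to a projection, and the extragradient step takes the familiar form y^k = Pr_C(x^k − λ F(x^k)), t^k = Pr_C(x^k − λ F(y^k)). The data below (A, b, the box C and the step size λ) are hypothetical:

```python
import numpy as np

# Extragradient sketch for f(x, y) = <F(x), y - x> on the box C = [0, 1]^2.
# Hypothetical data: F(x) = A x + b with A positive definite, so the
# equilibrium problem has the unique solution solving A x + b = 0 when that
# point lies inside C.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])

def F(x):
    return A @ x + b

def proj_C(x):
    return np.clip(x, 0.0, 1.0)  # projection onto the box is clipping

lam = 0.2
x = np.array([1.0, 0.0])
for _ in range(200):
    y = proj_C(x - lam * F(x))   # first strongly convex subproblem
    x = proj_C(x - lam * F(y))   # second subproblem (extragradient step)
# Here A x + b = 0 gives x = (1/3, 1/3), which lies inside C.
```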
As we will see in the next section, Algorithm 3.4 generates a sequence {x^k} that converges weakly to a solution x* of (3). Recently, a modification of Mann's algorithm for finding a common fixed point of p strict pseudo-contractions was proposed in [15]. The authors proved that this algorithm converges strongly to a common fixed point of the p strict pseudo-contractions. In the next algorithm, we extend the algorithm in [15] to the problem of finding a common element of the set of common fixed points of p strict pseudo-contractions and the solution set of the equilibrium problem, so as to obtain a strongly convergent algorithm. This algorithm is similar to the iterative scheme (8): an augmented step is added to Algorithm 3.4 to obtain a new variant of it.
For a given closed convex set X in H and a point x ∈ H, Pr_X(x) denotes the projection of x onto X. The algorithm is described as follows.
Algorithm 3.5. Initialization: Given a tolerance ε > 0, choose positive sequences {λ_k}, {λ_{k,i}} and {α_k} satisfying the conditions (10) together with an additional condition required for strong convergence. Find an initial point x^0 ∈ C. Set k := 0.
Iteration k: Perform the three steps below.
Step 1: Solve the two strongly convex programs

y^k = argmin{λ_k f(x^k, y) + (1/2)‖y − x^k‖² : y ∈ C},
t^k = argmin{λ_k f(y^k, y) + (1/2)‖y − x^k‖² : y ∈ C}.

Step 2: Compute z^k = λ_{k,0} t^k + ∑_{i=1}^{p} λ_{k,i} S_i(t^k).
Step 3: Compute x^{k+1} = Pr_{P_k ∩ Q_k}(x^0), where

P_k = {x ∈ H : ‖z^k − x‖ ≤ ‖x^k − x‖},
Q_k = {x ∈ H : ⟨x − x^k, x^0 − x^k⟩ ≤ 0}.

Set k := k + 1 and go back to Step 1.

The augmented step needed in Algorithm 3.5 is a simple projection onto the intersection of two half-spaces. This projection is cheap to compute in implementation.
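The claim that projecting onto the intersection of two half-spaces is cheap can be made concrete: the projection either coincides with the projection onto one half-space, or both constraints are active and a 2×2 Gram (KKT) system determines it. A self-contained sketch with hypothetical half-space data (edge cases such as parallel constraints are ignored for brevity):

```python
import numpy as np

# Projection of z onto H1 = {x : a1 . x <= b1} intersected with
# H2 = {x : a2 . x <= b2}, by checking which constraints are active.
def proj_halfspace(z, a, b):
    viol = a @ z - b
    return z if viol <= 0 else z - (viol / (a @ a)) * a

def proj_two_halfspaces(z, a1, b1, a2, b2):
    if a1 @ z <= b1 and a2 @ z <= b2:
        return z                      # z already feasible
    p1 = proj_halfspace(z, a1, b1)
    if a2 @ p1 <= b2 + 1e-12:
        return p1                     # only the first constraint is active
    p2 = proj_halfspace(z, a2, b2)
    if a1 @ p2 <= b1 + 1e-12:
        return p2                     # only the second constraint is active
    # Both constraints active: solve the 2x2 KKT (Gram) system for the
    # nonnegative multipliers mu with a1 . x = b1 and a2 . x = b2.
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    r = np.array([a1 @ z - b1, a2 @ z - b2])
    mu = np.linalg.solve(G, r)
    return z - mu[0] * a1 - mu[1] * a2

# Example: project (2, 2) onto {x1 <= 0} ∩ {x2 <= 0}; the result is (0, 0).
p_corner = proj_two_halfspaces(np.array([2.0, 2.0]),
                               np.array([1.0, 0.0]), 0.0,
                               np.array([0.0, 1.0]), 0.0)
# Example with one active constraint: project (2, -1); the result is (0, -1).
p_face = proj_two_halfspaces(np.array([2.0, -1.0]),
                             np.array([1.0, 0.0]), 0.0,
                             np.array([0.0, 1.0]), 0.0)
```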

The Convergence of the Algorithms
This section investigates the convergence of Algorithms 3.4 and 3.5. For this purpose, let us first recall the following technical lemmas, which will be used in the sequel.
Lemma 4.1 [25]. Let C be a nonempty closed convex subset of a real Hilbert space H and let {x^k} be a sequence in H. Suppose that, for all x ∈ C, the sequence {‖x^k − x‖} converges and that every weak cluster point of {x^k} belongs to C. Then {x^k} converges weakly to some x̄ ∈ C. Using the well-known necessary and sufficient condition for optimality in convex programming, we have the following result.
Lemma 4.2 [2]. Let C be a nonempty closed convex subset of a real Hilbert space H and let g: C → ℝ be convex and subdifferentiable on C. Then x* is a solution of the convex problem min{g(x) : x ∈ C} if and only if 0 ∈ ∂g(x*) + N_C(x*), where ∂g(·) denotes the subdifferential of g and N_C(x*) is the (outward) normal cone of C at x* ∈ C. The next lemma concerns a property of the projection mapping.
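For the special case g(x) = (1/2)‖x − a‖², the optimality condition of Lemma 4.2 characterizes the projection: x* = Pr_C(a) if and only if ⟨a − x*, x − x*⟩ ≤ 0 for all x ∈ C. A quick numeric sanity check on a box (the point a and the set C are illustrative):

```python
import numpy as np

# Sanity check of Lemma 4.2 for g(x) = 0.5 * ||x - a||^2 on C = [0, 1]^2:
# the minimizer x* = Pr_C(a) must satisfy <a - x*, x - x*> <= 0 for all x in C.
a = np.array([1.5, -0.3])
x_star = np.clip(a, 0.0, 1.0)  # Pr_C(a) on a box is coordinatewise clipping

grid = np.linspace(0.0, 1.0, 11)
optimal = all((a - x_star) @ (np.array([u, v]) - x_star) <= 1e-12
              for u in grid for v in grid)
```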
Lemma 4.3 [9]. Let C be a nonempty closed convex subset of a real Hilbert space H and let x^0 ∈ H. Let the sequence {x^k} be bounded and such that every weak cluster point of {x^k} belongs to C and ‖x^k − x^0‖ ≤ ‖Pr_C(x^0) − x^0‖ for all k. Then {x^k} converges strongly to Pr_C(x^0) as k → ∞.

Lemma 4.4 (see [14], Lemma 3.1). Let C be a nonempty closed convex subset of a real Hilbert space H. Let f: C × C → ℝ be a pseudo-monotone, Lipschitz-type continuous bifunction with constants c₁ > 0 and c₂ > 0. For each x ∈ C, let f(x, ·) be convex and subdifferentiable on C. Suppose that the sequences {x^k}, {y^k}, {t^k} are generated by Scheme (13). Then, for every x* ∈ Sol(f, C),

‖t^k − x*‖² ≤ ‖x^k − x*‖² − (1 − 2λ_k c₁)‖x^k − y^k‖² − (1 − 2λ_k c₂)‖y^k − t^k‖².

Now, we prove the main convergence theorem.

Proof. The proof of this theorem is divided into several steps.
Step 1. We claim that lim_{k→∞} ‖x^k − x*‖ exists. Indeed, for each k ≥ 1, we denote by S̄_k the convex combination of the mappings S_i with weights λ_{k,i}. By statement 4) of Proposition 2.3, we see that S̄_k is an L̄-strict pseudo-contraction on C. Then, using Lemma 4.4 and the relation between x^{k+1}, t^k and S̄_k t^k, the claimed limit exists.

Step 2. We show that lim_{k→∞} ‖x^k − y^k‖ = lim_{k→∞} ‖y^k − t^k‖ = 0. It follows from (17), combined with Lemma 4.4, that these residuals vanish. Then, using (a) of Proposition 2.3 and Step 1, we also obtain lim_{k→∞} ‖x^k − t^k‖ = 0. In Steps 3 and 4, we consider weak cluster points of {x^k}. It follows from (19) that the sequence {x^k} is bounded, and hence there exists a subsequence {x^{k_j}} converging weakly to x̄ as j → ∞.
Step 3. We show that x̄ ∈ ∩_{i=1}^{p} Fix(S_i, C). By 2) of Proposition 2.3, the mapping I − S̄_k is demiclosed at zero, and it then follows from 5) of Proposition 2.3 that x̄ is a common fixed point of the mappings S_i.

Step 4. We show that x̄ ∈ Sol(f, C). From the definition of y^k and Lemma 4.2, we obtain an inclusion involving the normal cone N_C. On the other hand, since f(x^k, ·) is subdifferentiable on C, by the well-known Moreau-Rockafellar theorem there exists a subgradient of f(x^k, ·) at y^k realizing this inclusion. Combining this with (20), then using Step 2, the continuity of f and passing to the limit as j → ∞, we conclude that x̄ ∈ Sol(f, C).

Step 5. We claim that the sequences {x^k}, {y^k} and {t^k} converge weakly to x̄ as k → ∞. Arguing by contradiction, we suppose that there exist two subsequences of the bounded sequence {x^k} with distinct weak cluster points.

By Step 3 and Step 4, we have x̄ ∈ ∩_{i=1}^{p} Fix(S_i, C) ∩ Sol(f, C). Then, using the existence of the limits established in Step 1 and the following well-known equality (see [7])

‖x^k − x̄‖² = ‖x^k − x̂‖² + 2⟨x^k − x̂, x̂ − x̄⟩ + ‖x̂ − x̄‖²,

we obtain x̄ = x̂. This implies that the weak cluster point of {x^k} is unique and belongs to ∩_{i=1}^{p} Fix(S_i, C) ∩ Sol(f, C). Now suppose that x^k ⇀ x̄ as k → ∞. By the definition of Pr_C(·), we have x̄ ∈ ∩_{i=1}^{p} Fix(S_i, C) ∩ Sol(f, C).


Then, by Lemma 4.1, passing to the limit in (21) and combining this with (22), it follows from Step 2 that the sequences {x^k}, {y^k} and {t^k} converge weakly to x̄ as k → ∞.


The proof is complete. The next theorem establishes the strong convergence of Algorithm 3.5.

Proof. We again divide the proof into several steps.
Step 1. We claim that (23) holds for all k ≥ 0, i.e., x* ∈ P_k for every k, which we prove by induction. For k = 0, the inclusion holds directly; for the general case, it follows from the induction assumption. Hence (23) holds for all k ≥ 0.
Step 2. We show that the residuals of the scheme vanish. Using the definition of x^{k+1} and a standard norm equality, and since f(x^k, ·) is subdifferentiable on C, the well-known Moreau-Rockafellar theorem provides a subgradient for which, by the definition of the normal cone N_C, the required estimate follows from the latter inequality.

Step 3. We show that the sequence {x^k} converges strongly to x*.

Theorem 4.5. Suppose that Assumptions 3.1-3.3 are satisfied. Then the sequences {x^k}, {y^k} and {t^k} generated by Algorithm 3.4 converge weakly to the same point x* ∈ Sol(f, C) ∩ ∩_{i=1}^{p} Fix(S_i, C).

Theorem 4.6. Suppose that Assumptions 3.1-3.3 are satisfied. Then the sequences {x^k} and {y^k} generated by Algorithm 3.5 converge strongly to the same point x*,

By Lemma 4.3, and arguing as in Step 4 and Step 5 of the proof of Theorem 4.5, we can claim that every weak cluster point x̄ of the sequence {x^k} satisfies x̄ ∈ Sol(f, C) ∩ ∩_{i=1}^{p} Fix(S_i, C). On the other hand, using the definition of Q_k, we have ‖x^k − x^0‖ ≤ ‖Pr_{Sol(f,C) ∩ ∩_{i=1}^{p} Fix(S_i,C)}(x^0) − x^0‖ for all k. Hence, by Lemma 4.3, the sequence {x^k} converges strongly to x* as k → ∞, where x* = Pr_{Sol(f,C) ∩ ∩_{i=1}^{p} Fix(S_i,C)}(x^0). Since ‖y^k − x^k‖ → 0, we also have y^k → x* as k → ∞.