Convergence Rate Analysis of Modified BiG-SAM for Solving Bi-Level Optimization Problems Based on S-FISTA

Abstract

In this paper, we consider a more general bi-level optimization problem in which the inner objective function consists of three convex functions: one smooth and two non-smooth. The outer objective function is a classical strongly convex function which may be non-smooth. Motivated by smoothing approaches, we modify the classical bi-level gradient sequential averaging method (BiG-SAM) to solve this bi-level optimization problem. Under mild conditions, we obtain the convergence rate of the generated sequence, and then, based on the analysis framework of S-FISTA, we establish the global convergence rate of the proposed algorithm.

Share and Cite:

Xiaoyin, N. and Yang, L. (2025) Convergence Rate Analysis of Modified BiG-SAM for Solving Bi-Level Optimization Problems Based on S-FISTA. Journal of Applied Mathematics and Physics, 13, 1555-1576. doi: 10.4236/jamp.2025.134084.

1. Introduction

In this paper, we mainly consider bi-level optimization problems, which derive from the Stackelberg game in game theory. A bi-level optimization problem is a special kind of optimization problem that involves two levels, called the outer level and the inner level. This structure means that the goal and constraints of the outer-level problem depend on the solution of the inner problem. Bi-level optimization problems have a wide range of applications in many fields, including economics, engineering design, transportation planning, machine learning and so on.

Recall the classical bi-level optimization problem, in which both the outer and inner objective functions are convex. The outer level is the constrained minimization problem

$$\min_{x\in X^*} \omega(x), \tag{OP}$$

where X * is the set of minimizers of the inner objective function, which is a composite convex minimization problem, as follows,

$$\min_{x\in\mathbb{R}^n}\{\, \Phi(x) := f(x) + g(x) \,\}. \tag{P1}$$

In this case, $\omega$ is a strongly convex and differentiable function, $f$ is a convex and continuously differentiable function, and $g$ is an extended real-valued convex function on $\mathbb{R}^n$; in particular, $g$ may be nonsmooth. Problem (OP)-(P1) is called simple bi-level optimization in [1], as opposed to the more general version of the problem; see [2].

Note that both the inner problem (P1) and the outer problem (OP) are classical convex optimization problems, which can be solved, depending on the case, by the projected gradient method, the proximal gradient algorithm, the forward-backward splitting algorithm, and so on. However, the combination of problems (OP) and (P1) is difficult to handle.

Problem (OP)-(P1) can be solved directly or indirectly. A common indirect approach transforms the bi-level problem into a single-level problem, which is then easier to solve. The classical tool for the bi-level optimization problem is Tikhonov regularization [3]: for some $\theta>0$, one solves the regularized problem

$$\min_{x\in\mathbb{R}^n}\{\, \Phi_\theta(x) := \Phi(x) + \theta\,\omega(x) \,\}. \tag{1.1}$$

Problems (OP) and (P1) can be traced back to the work of Mangasarian and Meyer [4] on developing efficient algorithms for large-scale linear programs. They proposed a modification of the Tikhonov regularization technique [3] whose underlying idea is the so-called finite-perturbation property: one finds a parameter $\theta^*$ (the Tikhonov perturbation parameter) such that for all $\theta\in(0,\theta^*]$,

$$\arg\min_{x\in X^*}\omega(x) = \arg\min_{x\in\mathbb{R}^n}\Phi_\theta(x).$$

Mangasarian initially proved this property when the inner-level problem is a linear program; it was later extended by Ferris et al. [5] to the case where the inner problem is a general convex optimization problem.

In [5], the authors considered the case where $g$ is the indicator function of a closed convex set $C$ and, under some restrictions, demonstrated that the optimal solution of problem (1.1) is the optimal solution of problem (OP) whenever there exists a small enough $\theta^*>0$. In practice, the value of $\theta^*$ is unknown, which means that solving problem (1.1) must rely on a sequence of regularization parameters $\{\theta_n\}$ with $\theta_n\to 0$ as $n\to+\infty$. Solodov [6] showed that, provided $\sum_{n=1}^{\infty}\theta_n=\infty$, there is no need to find the exact optimal solution of problem (1.1) for each parameter $\theta_n$. He proposed an explicit and more tractable proximal point method for the bi-level optimization problem (OP)-(P1), in contrast to the algorithm proposed by Cabot [7], whose approximation scheme is only implicit and therefore not easy to implement. Based on the proximal point algorithm, researchers have since developed various proximal point algorithms to solve such problems under different frameworks; see [8] [9].

On the other hand, we can solve the bi-level problem (OP)-(P1) by a direct approach, the hybrid steepest descent method [10], where the sequence converges to the optimal solution provided that $\sum_{n=1}^{\infty}\theta_n=\infty$ and $\theta_n\to 0$ as $n\to\infty$. The hybrid steepest descent method was further extended by Neto et al. to handle a more general outer objective function.

Recently, Beck et al. [11] proposed a new direct first-order method, called the Minimal Norm Gradient method, for solving problem (OP). The authors proved that the algorithm has an $O(1/\sqrt{n})$ convergence rate in terms of the inner objective function. However, for some choices of the outer objective function $\omega$, this method is computationally too expensive. Motivated by the minimal norm gradient method, Sabach [12] suggested a first-order method, called BiG-SAM, to solve the bi-level optimization problem, based on existing viscosity approximation methods [13]. In the convergence analysis of BiG-SAM, they obtained an $O(1/n)$ global convergence rate in terms of the inner-level function. In addition, Yekini et al. combined an inertial technique with BiG-SAM and proposed an inertial BiG-SAM algorithm; for more details, see [14].

In this paper, we consider a more general composite convex function as the inner objective of the bi-level optimization problem, namely,

$$\min_{x\in\mathbb{R}^n}\{\, H(x) := f(x) + h(x) + g(x) \,\}, \tag{P2}$$

where $f$ is a continuously differentiable function with $L_f$-Lipschitz continuous gradient, $h$ is a real-valued convex function, and $g$ is an extended real-valued function. This model is rich enough to cover many interesting generic optimization models by appropriate choices of $(f,h,g)$. Details of the assumptions on these functions are given in Section 2. Let $x^*_{op}$ denote the unique optimal solution of problem (OP).

This paper is organized as follows. In Section 2, we use a smoothing technique to partially smooth the inner-level problem (P2), construct the inner objective function (Q), and give some useful lemmas for the convergence rate analysis. In Section 3, we introduce a new BiG-SAM algorithm for solving the bi-level optimization problem with outer level (OP). In Section 4, we investigate the convergence rate of BiG-SAM for the non-smooth version of the bi-level optimization problem.

2. Motivation and Construction

In this section, we present the motivation and the design of our algorithm, as well as some useful lemmas. Recall the bi-level optimization problem, where the outer level is problem (OP) and the inner level is

$$\min_{x\in\mathbb{R}^n}\{\, H(x) := f(x) + h(x) + g(x) \,\}, \tag{P2}$$

where $f$, $h$ and $g$ satisfy the following assumption.

Assumption I:

i) $f:\mathbb{R}^n\to\mathbb{R}$ is a convex and continuously differentiable function whose gradient is Lipschitz continuous with constant $L_f$, i.e.,

$$\|\nabla f(x)-\nabla f(y)\| \le L_f\|x-y\|, \quad \forall x,y\in\mathbb{R}^n.$$

ii) $h:\mathbb{R}^n\to\mathbb{R}$ is an $(\alpha,\beta)$-smoothable function $(\alpha,\beta>0)$; that is, for any $\mu>0$ there exists a $\frac{1}{\mu}$-smooth approximation $h_\mu$ of $h$ with parameters $(\alpha,\beta)$.

iii) $g:\mathbb{R}^n\to(-\infty,\infty]$ is a proper, lower semicontinuous and convex function.

iv) $H$ has bounded level sets. Specifically, for any $\delta>0$, there exists $R_\delta>0$ such that

$$\|x\| \le R_\delta \quad \text{for any } x \text{ satisfying } H(x)\le\delta.$$

v) Let $X^*_{P2}$ be the optimal set of problem (P2), which is assumed nonempty, and let $H_{\mathrm{opt}}$ denote the optimal value of problem (P2).

Definition 2.1. [15] A convex function $h:\mathbb{R}^n\to\mathbb{R}$ is called $(\alpha,\beta)$-smoothable $(\alpha,\beta>0)$ if for any $\mu>0$ there exists a convex differentiable function $h_\mu:\mathbb{R}^n\to\mathbb{R}$ such that the following holds:

a) $h_\mu(x) \le h(x) \le h_\mu(x)+\beta\mu$ for all $x\in\mathbb{R}^n$.

b) $h_\mu$ is $\frac{\alpha}{\mu}$-smooth.

The function $h_\mu$ is called a $\frac{1}{\mu}$-smooth approximation of $h$ with parameters $(\alpha,\beta)$.
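As a concrete illustration of Definition 2.1 (our own standard example, not taken from the paper), the absolute value $h(x)=|x|$ admits the Huber function as a $\frac{1}{\mu}$-smooth approximation with parameters $(\alpha,\beta)=(1,\tfrac12)$. The following sketch checks properties a) and b) numerically:

```python
import numpy as np

def huber(x, mu):
    """Huber function: a 1/mu-smooth approximation of abs(x)."""
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

mu = 0.1
xs = np.linspace(-2.0, 2.0, 401)
h = np.abs(xs)
h_mu = huber(xs, mu)

# Property a): h_mu(x) <= h(x) <= h_mu(x) + beta*mu with beta = 1/2.
assert np.all(h_mu <= h + 1e-12)
assert np.all(h <= h_mu + 0.5 * mu + 1e-12)

# Property b): the gradient of h_mu is (alpha/mu)-Lipschitz with alpha = 1;
# estimate the Lipschitz constant of the numerical gradient on the grid.
grad = np.gradient(h_mu, xs)
lip_est = np.max(np.abs(np.diff(grad) / np.diff(xs)))
assert lip_est <= 1.0 / mu + 1e-6
```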

According to Assumption I (ii), combined with Definition 2.1, we can smooth $h$ into a $\frac{1}{\mu}$-smooth function $h_\mu$. Then problem (P2) becomes

$$\min_{x\in\mathbb{R}^n}\Big\{\, H_\mu(x) := \underbrace{f(x)+h_\mu(x)}_{F_\mu(x)} + g(x) \,\Big\}.$$

For convenience, we write the above composite minimization problem as the following form,

$$\min_{x\in\mathbb{R}^n}\{\, F_\mu(x)+g(x) \,\}. \tag{Q}$$

Remark 2.1. Here, let $X^*$ be the optimal solution set of problem (Q), which is non-empty, with $X^*\subseteq X^*_{P2}$. When $\mu$ is small enough, the optimal solution set $X^*$ equals $X^*_{P2}$. This implies that when $\mu$ is small enough, minimizing $\omega$ over $X^*$ is equivalent to minimizing $\omega$ over $X^*_{P2}$, i.e.,

$$\min_{x\in X^*}\omega(x) = \min_{x\in X^*_{P2}}\omega(x).$$

Observe that problem (P2) is a non-smooth composite problem involving two non-smooth functions. A common methodology for solving non-smooth optimization problems is to replace the original problem by a sequence of approximating smooth problems and then apply direct, classical methods [16]. The main idea is to transform the nondifferentiable problem into a smooth one; there are many different smoothing approaches for various classes of non-smooth optimization problems, see [17]-[19]. Motivated by the work of Beck et al. [15], we partially smooth the inner objective function (P2) and transform it into a classical composite convex optimization problem. The motivation for this approach is twofold. First, regarding the design and analysis of the related schemes, the classical composite optimization model, like (P1) where $f$ is smooth and $g$ is nonsmooth, can be solved by gradient-based algorithms [20] [21]. Second, in many applications [22] [23], one of the non-smooth terms in (Q) plays a key role in describing a desirable property of the decision variable $x$; smoothing all the non-smooth functions in (P2) would destroy this property.

Since $h_\mu$ is $\frac{\alpha}{\mu}$-smooth, it has an $\frac{\alpha}{\mu}$-Lipschitz continuous gradient $\nabla h_\mu$. Since $F_\mu = f + h_\mu$ and $f$ also has an $L_f$-Lipschitz continuous gradient, $F_\mu$ is a continuously differentiable convex function whose gradient has Lipschitz constant $L_f + \frac{\alpha}{\mu}$. Thus, problem (Q) can be solved by the classical proximal gradient (PG) method, also known as the proximal forward-backward method, whose iteration reads

$$x_{n+1} = \operatorname{prox}_{\lambda g}\big( x_n - \lambda\nabla F_\mu(x_n) \big), \quad n\in\mathbb{N}, \tag{2.1}$$

where the stepsize is $\lambda = 1/(L_f + \frac{\alpha}{\mu})$. Since $g$ is a proper, lower semicontinuous and convex function, the Moreau proximal mapping $\operatorname{prox}_{\lambda g}$ is well defined:

$$\operatorname{prox}_{\lambda g}(x) := \arg\min_{u\in\mathbb{R}^n}\Big\{\, g(u) + \frac{1}{2\lambda}\|u-x\|^2 \,\Big\}. \tag{2.2}$$
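For a standard concrete instance of the mapping (2.2) (our own example, not from the paper): when $g=\|\cdot\|_1$, the proximal mapping has the closed-form soft-thresholding expression, which the sketch below cross-checks against a brute-force minimization of the defining objective in one dimension:

```python
import numpy as np

def prox_l1(x, lam):
    """Moreau proximal mapping (2.2) for g = ||.||_1: soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Sanity check against the definition: the prox is the minimizer of
# g(u) + ||u - x||^2 / (2*lam); compare with a fine grid search (1-D case).
x, lam = 1.3, 0.5
grid = np.linspace(-3, 3, 60001)
objective = np.abs(grid) + (grid - x) ** 2 / (2 * lam)
brute = grid[np.argmin(objective)]
assert abs(prox_l1(np.array([x]), lam)[0] - brute) < 1e-3
```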

In addition, the PG method (2.1) can be regarded as a fixed-point algorithm for the mapping

$$T_\lambda(x) := \operatorname{prox}_{\lambda g}\big( x - \lambda\nabla F_\mu(x) \big), \tag{2.3}$$

called the prox-grad (proximal-gradient) mapping. Denote by $\operatorname{Fix}(T_\lambda) := \{x\in\mathbb{R}^n \mid T_\lambda(x)=x\}$ the fixed-point set of $T_\lambda$. From [24] and [22], we have the following two crucial properties.

Lemma 2.1. [12]

i) The prox-grad mapping $T_\lambda$ is nonexpansive for all $\lambda\in(0, 1/(L_f+\alpha/\mu)]$, that is,

$$\|T_\lambda(x)-T_\lambda(y)\| \le \|x-y\|, \quad \forall x,y\in\mathbb{R}^n. \tag{2.4}$$

ii) Fixed points of the prox-grad mapping $T_\lambda$ are optimal solutions of problem (Q), and conversely, i.e.,

$$x\in X^* \iff x = T_\lambda(x) = \operatorname{prox}_{\lambda g}\big(x-\lambda\nabla F_\mu(x)\big). \tag{2.5}$$

Therefore, $\operatorname{Fix}(T_\lambda) = X^*$ for all $\lambda>0$.

Now, we give a key proposition, which is a significant result in the convergence rate analysis. Indeed, we consider the following quadratic approximation of $H_\mu(x) := f(x)+h_\mu(x)+g(x)$ at $y$:

$$Q_\lambda(x,y) := f(y)+h_\mu(y) + \langle x-y,\, \nabla f(y)+\nabla h_\mu(y)\rangle + \frac{1}{2\lambda}\|x-y\|^2 + g(x),$$

which admits a unique minimizer

$$p_\lambda(y) := \arg\min\{\, Q_\lambda(x,y) : x\in\mathbb{R}^n \,\}.$$

It follows that

$$p_\lambda(y) = \arg\min_{x\in\mathbb{R}^n}\Big\{\, g(x) + \frac{1}{2\lambda}\big\| x - \big( y - \lambda\nabla f(y) - \lambda\nabla h_\mu(y) \big) \big\|^2 \,\Big\}.$$

From the characterization of the optimality of $p_\lambda(\cdot)$, we have the following lemma.

Lemma 2.2. For any $y\in\mathbb{R}^n$, one has $z = p_\lambda(y)$ if and only if there exists $\gamma(y)\in\partial g(z)$, where $\partial g(\cdot)$ denotes the subdifferential of $g$, such that

$$\nabla f(y) + \nabla h_\mu(y) + \frac{1}{\lambda}(z-y) + \gamma(y) = 0. \tag{2.6}$$

Then we have the following proposition.

Proposition 2.1. Suppose that Assumption I holds. Let $y\in\mathbb{R}^n$ and denote $p_\lambda(y) = T_\lambda(y)$, so that

$$H_\mu\big(p_\lambda(y)\big) \le Q_\lambda\big(p_\lambda(y), y\big). \tag{2.7}$$

Then, for any $\lambda \le 1/(L_f+\alpha/\mu)$ and $x\in\mathbb{R}^n$, we have

$$H_\mu(x) - H_\mu\big(p_\lambda(y)\big) \ge \frac{1}{2\lambda}\|p_\lambda(y)-y\|^2 + \frac{1}{\lambda}\langle y-x,\, p_\lambda(y)-y\rangle. \tag{2.8}$$

Proof. From (2.7), we have

$$H_\mu(x) - H_\mu\big(p_\lambda(y)\big) \ge H_\mu(x) - Q_\lambda\big(p_\lambda(y), y\big). \tag{2.9}$$

Since $f$, $h_\mu$ and $g$ are convex, it follows that

$$f(x) \ge f(y) + \langle x-y, \nabla f(y)\rangle,$$

$$h_\mu(x) \ge h_\mu(y) + \langle x-y, \nabla h_\mu(y)\rangle,$$

$$g(x) \ge g\big(p_\lambda(y)\big) + \langle x - p_\lambda(y), \gamma(y)\rangle,$$

where $\gamma(y)$ is as in Lemma 2.2. Summing the above inequalities, we obtain

$$H_\mu(x) \ge f(y) + h_\mu(y) + \langle x-y,\, \nabla f(y)+\nabla h_\mu(y)\rangle + g\big(p_\lambda(y)\big) + \langle x-p_\lambda(y),\, \gamma(y)\rangle. \tag{2.10}$$

On the other hand, from the definition of $Q_\lambda(x,y)$ with $x := p_\lambda(y)$, we have

$$Q_\lambda\big(p_\lambda(y), y\big) = f(y)+h_\mu(y) + \langle p_\lambda(y)-y,\, \nabla f(y)+\nabla h_\mu(y)\rangle + \frac{1}{2\lambda}\|p_\lambda(y)-y\|^2 + g\big(p_\lambda(y)\big). \tag{2.11}$$

Now, combining (2.9) with (2.10) and (2.11), it follows that

$$\begin{aligned}
H_\mu(x) - H_\mu\big(p_\lambda(y)\big) &\ge H_\mu(x) - Q_\lambda\big(p_\lambda(y), y\big) \\
&\ge \langle x - p_\lambda(y),\, \nabla f(y)+\nabla h_\mu(y)+\gamma(y)\rangle - \frac{1}{2\lambda}\|p_\lambda(y)-y\|^2 \\
&= -\frac{1}{2\lambda}\|p_\lambda(y)-y\|^2 + \frac{1}{\lambda}\langle p_\lambda(y)-x,\, p_\lambda(y)-y\rangle \\
&= -\frac{1}{2\lambda}\|p_\lambda(y)-y\|^2 + \frac{1}{\lambda}\|p_\lambda(y)-y\|^2 + \frac{1}{\lambda}\langle y-x,\, p_\lambda(y)-y\rangle \\
&= \frac{1}{2\lambda}\|p_\lambda(y)-y\|^2 + \frac{1}{\lambda}\langle y-x,\, p_\lambda(y)-y\rangle,
\end{aligned}$$

where the first equality uses (2.6). This completes the proof.

Now, we turn to the details of the outer-level problem (OP). Recall that (OP) is a constrained convex optimization problem, where $X^*$ is the optimal solution set of problem (Q). In general, we suppose that the outer objective function satisfies the following assumptions.

Assumption II.

i) ω: n is σ -strongly convex, σ>0 .

ii) ω is a continuously differentiable function such that ω is Lipschitz continuous with constant L ω >0 .

Since $\omega$ is differentiable, we can apply the gradient descent method to solve the outer-level problem (OP). Nevertheless, not every outer function $\omega$ satisfies Assumption II (ii); that is, $\omega$ may be nonsmooth. In that case, we assume $\omega$ satisfies the following property.

Assumption III: $\omega:\mathbb{R}^n\to\mathbb{R}$ is strongly convex with parameter $\sigma>0$ and Lipschitz continuous with constant $\ell_\omega$.

In this case, we can rely on the Moreau envelope of $\omega$, denoted $M_{s\omega}$, to solve the outer-level problem:

$$M_{s\omega}(x) = \min_{u\in\mathbb{R}^n}\Big\{\, \omega(u) + \frac{1}{2s}\|u-x\|^2 \,\Big\}. \tag{2.12}$$

It is well known that $M_{s\omega}$ is continuously differentiable on $\mathbb{R}^n$ with a $1/s$-Lipschitz continuous gradient, given by

$$\nabla M_{s\omega}(x) = \frac{1}{s}\big( x - \operatorname{prox}_{s\omega}(x) \big). \tag{2.13}$$
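Formula (2.13) can be checked numerically on a toy outer function (our own example, not from the paper): for $\omega(x)=\frac12\|x-a\|^2$ the prox has the closed form $\operatorname{prox}_{s\omega}(x)=(x+sa)/(1+s)$, so the envelope and its gradient are easy to evaluate and compare against finite differences:

```python
import numpy as np

# Toy outer function omega(x) = 0.5*||x - a||^2 (hypothetical example).
a = np.array([1.0, -2.0])
s = 0.7

def moreau_env(x):
    u = (x + s * a) / (1 + s)                   # prox_{s*omega}(x), closed form
    return 0.5 * np.sum((u - a) ** 2) + np.sum((u - x) ** 2) / (2 * s)

def grad_moreau(x):
    u = (x + s * a) / (1 + s)
    return (x - u) / s                          # formula (2.13)

x = np.array([0.3, 0.9])
eps = 1e-6
fd = np.array([(moreau_env(x + eps * e) - moreau_env(x - eps * e)) / (2 * eps)
               for e in np.eye(2)])
assert np.allclose(grad_moreau(x), fd, atol=1e-5)
# For this omega the gradient also matches the analytic value (x - a)/(1 + s).
assert np.allclose(grad_moreau(x), (x - a) / (1 + s))
```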

Additionally, the Moreau envelope enjoys another useful property:

Lemma 2.3. [12] Let ω: n ( , ] be a strongly convex function with strong convexity parameter σ and let s>0 . Then, the Moreau envelope M sω is strongly convex with parameter σ/ ( 1+sσ ) .

Definition 2.2. [12] A mapping $S:\mathbb{R}^n\to\mathbb{R}^n$ is said to be an $\eta$-contraction if there exists $\eta\in[0,1)$ such that

$$\|S(x)-S(y)\| \le \eta\|x-y\|, \quad \forall x,y\in\mathbb{R}^n.$$

When ω satisfies Assumption II, the following result is crucial for our derivations.

Lemma 2.4. [12] Suppose that Assumption II holds, and let $I$ be the identity operator. Then the mapping $S_s = I - s\nabla\omega$ is a contraction for all $s \le 2/(L_\omega+\sigma)$, that is,

$$\big\| x - s\nabla\omega(x) - \big( y - s\nabla\omega(y) \big) \big\| \le \sqrt{1 - \frac{2s\sigma L_\omega}{\sigma+L_\omega}}\;\|x-y\|, \quad \forall x,y\in\mathbb{R}^n. \tag{2.14}$$
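The contraction bound (2.14) can be verified on a quadratic instance (our own toy construction; the matrix and constants are hypothetical): for $\omega(x)=\frac12 x^\top Qx$ with spectrum in $[\sigma, L_\omega]$, the map $x\mapsto x-sQx$ is linear and the bound can be tested on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
# omega(x) = 0.5 * x^T Q x with eigenvalues in [sigma, L]: sigma-strongly
# convex with an L-Lipschitz gradient (hypothetical instance).
sigma, L = 0.5, 4.0
eigs = np.array([sigma, 1.0, 2.5, L])
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q = U @ np.diag(eigs) @ U.T

s = 2.0 / (L + sigma)                                  # s <= 2/(L + sigma)
eta = np.sqrt(1.0 - 2.0 * s * sigma * L / (sigma + L))  # bound in (2.14)

for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm((x - s * Q @ x) - (y - s * Q @ y))
    assert lhs <= eta * np.linalg.norm(x - y) + 1e-10
```

For this choice of $s$ the factor reduces to $\eta=(L-\sigma)/(L+\sigma)$, the classical gradient-descent contraction rate.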

3. BiG-SAM Algorithm for Smooth Bi-Level Optimization

In this section, we introduce a new BiG-SAM algorithm for solving bi-level optimization. First, we construct a general framework for the bi-level problem, consisting of the inner problem (Q).

3.1. The General Framework

Motivated by Sabach et al. [12], our approach also uses the Sequential Averaging Method (SAM), proposed in [13] for fixed-point problems. We now explain how to use it to solve the bi-level optimization problem made up of problems (OP) and (Q). The sequence $\{x_n\}$ generated by SAM converges to a solution of the fixed-point problem [13]; the iteration is

$$x_n = \alpha_n S(x_{n-1}) + (1-\alpha_n)T(x_{n-1}),$$

where { α n } n is a carefully chosen sequence in ( 0,1 ] .

The above algorithm, designed in [13], finds a fixed point of a nonexpansive operator $T$, i.e. $x^*\in\operatorname{Fix}(T)$. This point also satisfies the variational inequality

$$\langle x^* - S(x^*),\, x - x^* \rangle \ge 0, \quad \forall x\in\operatorname{Fix}(T), \tag{3.1}$$

where $S$ is a contraction mapping; in this sense, $x^*$ is the "best" fixed point in $\operatorname{Fix}(T)$. The sequence $\{\alpha_n\}$ satisfies the following assumption.

Assumption IV. Let $\{\alpha_n\}_{n\in\mathbb{N}}$ be a sequence of real numbers in $(0,1]$ satisfying $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\lim_{n\to\infty}\alpha_{n+1}/\alpha_n = 1$.

It should be noted that Assumption IV holds for several choices of $\{\alpha_n\}_{n\in\mathbb{N}}$, including, for example, $\alpha_n = \alpha/n$, $n\in\mathbb{N}$, for any $\alpha\in(0,1]$.

The following lemma summarizes the key results on SAM, as established in ([13], Theorem 3.2), which serve as the foundation for this paper.

Lemma 3.1. [12] Assume that $S:\mathbb{R}^n\to\mathbb{R}^n$ is an $\eta$-contraction and that $T:\mathbb{R}^n\to\mathbb{R}^n$ is a nonexpansive mapping with $\operatorname{Fix}(T)\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be the sequence generated by SAM. If Assumption IV holds, then the following assertions are valid.

i) The sequence $\{x_n\}_{n\in\mathbb{N}}$ is bounded; in particular, for any $\tilde{x}\in\operatorname{Fix}(T)$ and all $n\in\mathbb{N}$,

$$\|x_n - \tilde{x}\| \le C_{\tilde{x}} := \max\Big\{ \|x_0-\tilde{x}\|,\; \frac{1}{1-\eta}\|(I-S)\tilde{x}\| \Big\}. \tag{3.2}$$

Moreover, for all $n\in\mathbb{N}$, we also have

$$\|T(x_n)-\tilde{x}\| \le C_{\tilde{x}} \quad\text{and}\quad \|S(x_n)-S(\tilde{x})\| \le \eta C_{\tilde{x}}.$$

ii) The sequence { x n } n converges to some x * Fix( T ) .

iii) The limit point $x^*$ of $\{x_n\}_{n\in\mathbb{N}}$, whose existence is ensured by (ii), satisfies the variational inequality

$$\langle x^* - S(x^*),\, x - x^* \rangle \ge 0, \quad \forall x\in\operatorname{Fix}(T). \tag{3.3}$$

3.2. SAM for Smooth Bi-Level Optimization Problem

From Section 2, we know that the inner-level optimization problem (P2) can be smoothed into problem (Q), which has the same structure as the inner-level problem in [12]. Inspired by [12], we can match the outer problem (OP) and the inner problem (Q) with mappings $S$ and $T$, respectively. Here, we know that:

i) The mapping T and its fixed-point set Fix( T ) are related to problem (Q) with the composite function H μ = F μ +g and the optimal solution set X * .

ii) The mapping S is related to problem (OP) and the outer objective function ω .

Thus, we set $T$ as the prox-grad mapping defined in (2.3); that is, for some $\lambda\in(0, 1/(L_f+\alpha/\mu)]$ we have

$$T(x) := T_\lambda(x) = \operatorname{prox}_{\lambda g}\big( x - \lambda\nabla F_\mu(x) \big). \tag{3.4}$$

According to Lemma 2.1 and Assumption I, $T$ is nonexpansive and $\operatorname{Fix}(T) = X^*$. Then, from Lemma 2.4 and Assumption II, we can construct the $\eta$-contraction mapping $S$ as follows:

$$S(x) := x - s\nabla\omega(x), \tag{3.5}$$

where $s\in(0, 2/(\sigma+L_\omega)]$, and the contraction parameter is $\eta = \big( 1 - 2sL_\omega\sigma/(L_\omega+\sigma) \big)^{1/2}$.

Similarly, we use the Sequential Averaging Method (SAM) to design a new BiG-SAM algorithm for solving the bi-level optimization problem formed by (Q) and (OP). The iteration is as follows.

New Bi-level Gradient SAM (BiG-SAM)

Input: $\lambda\in(0, 1/(L_f+\alpha/\mu)]$, $s\in(0, 2/(L_\omega+\sigma)]$, and $\{\alpha_n\}_{n\in\mathbb{N}}$ satisfying Assumption IV.

Initialization: $x_0\in\mathbb{R}^n$.

General Step ($n=1,2,\dots$):

$$y_n = \operatorname{prox}_{\lambda g}\big( x_{n-1} - \lambda\nabla F_\mu(x_{n-1}) \big), \tag{3.6}$$

$$z_n = x_{n-1} - s\nabla\omega(x_{n-1}), \tag{3.7}$$

$$x_n = \alpha_n z_n + (1-\alpha_n) y_n. \tag{3.8}$$
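The three steps above can be sketched on a small synthetic bi-level instance (entirely our own construction; the functions, constants, and tolerances are hypothetical and chosen only so each step has a closed form). The inner solution set is a segment, and the outer objective $\omega(x)=\frac12\|x\|^2$ should select (approximately) its minimum-norm point:

```python
import numpy as np

# Toy instance: f(x) = 0.5*(x1 + x2 - 2)^2, h(x) = 0.01*||x||_1 (Huber-smoothed
# with parameter mu), g = indicator of the box [0, 2]^2 (prox = clipping),
# outer omega(x) = 0.5*||x||^2, so S(x) = x - s*grad(omega)(x) = (1 - s)*x.
# Inner solutions form the segment {x1 + x2 = 2, x >= 0}; the outer problem
# selects roughly the minimum-norm point, near (1, 1).
mu, c, L_f = 0.01, 0.01, 2.0
lam = 1.0 / (L_f + c / mu)            # step size lambda = 1/(L_f + alpha/mu)
s = 1.0                               # s <= 2/(L_omega + sigma) = 1
eta = 0.0                             # contraction parameter for this s

def grad_F_mu(x):
    grad_f = (x[0] + x[1] - 2.0) * np.ones(2)
    grad_h = c * np.clip(x / mu, -1.0, 1.0)   # gradient of Huber-smoothed l1
    return grad_f + grad_h

x = np.array([2.0, 0.0])
for n in range(1, 3001):
    y = np.clip(x - lam * grad_F_mu(x), 0.0, 2.0)   # step (3.6): prox-grad
    z = (1.0 - s) * x                               # step (3.7): outer step
    alpha_n = min(2.0 / (n * (1.0 - eta)), 1.0)     # schedule (3.14)
    x = alpha_n * z + (1.0 - alpha_n) * y           # step (3.8): averaging

assert abs(x[0] + x[1] - 2.0) < 0.05          # near the inner solution set
assert np.allclose(x, [1.0, 1.0], atol=0.05)  # near the min-norm solution
```

Note that the limit deviates slightly from $(1,1)$ because the $\ell_1$ term and its smoothing perturb the inner problem; the tolerances above account for that.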

Since the new BiG-SAM algorithm is similar to the scheme in [12], we obtain a similar convergence result.

Lemma 3.2. Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence generated by the new BiG-SAM. Suppose that Assumptions I, II and IV hold. Then the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges to $x^*\in X^*$ which satisfies

$$\langle \nabla\omega(x^*),\, x - x^* \rangle \ge 0, \quad \forall x\in X^*, \tag{3.9}$$

and therefore, x * = x op * is the optimal solution of problem (OP).

The proof of Lemma 3.2 is similar to the proof of Proposition 5 in [12].

3.3. The Global Convergence Rate of BiG-SAM

In this section, we first prove a technical result on the convergence rate of the gap between successive SAM iterations for fixed-point problems, as described in Section 3.1. Then, we use this to derive our main result: a convergence rate for BiG-SAM in terms of the values of the inner objective function.

We first present a technical lemma which will assist us in the rate of convergence proof.

Lemma 3.3. [12] Let $M>0$. Assume that $\{a_n\}_{n\in\mathbb{N}}$ is a sequence of nonnegative real numbers satisfying $a_1\le M$ and

$$a_{n+1} \le (1-\gamma b_{n+1})a_n + (b_n - b_{n+1})c_n, \quad n\ge 1,$$

where $\gamma\in(0,1]$, $\{b_n\}_{n\in\mathbb{N}}$ is the sequence defined by $b_n = \min\{2/(\gamma n), 1\}$, and $\{c_n\}_{n\in\mathbb{N}}$ is a sequence of real numbers with $c_n \le M < \infty$. Then the sequence $\{a_n\}_{n\in\mathbb{N}}$ satisfies

$$a_n \le \frac{MJ}{\gamma n}, \quad n\ge 1,$$

where $J = \lceil 2/\gamma \rceil$.

For simplicity, we denote $y_n = T(x_{n-1})$ and $z_n = S(x_{n-1})$ for any $n\in\mathbb{N}$. The convergence rate analysis is divided into two parts, which ultimately lead to the main conclusions in Theorem 3.1 and Theorem 3.2. Lemma 3.4 provides useful inequalities, while Proposition 3.1 shows that, by choosing an appropriate sequence $\{\alpha_n\}_{n\in\mathbb{N}}$, the distance between successive elements of $\{x_n\}_{n\in\mathbb{N}}$ is bounded by $O(1/n)$, and the sequence converges to a fixed point of $T$ at the same rate.

Lemma 3.4. [12] Assume that $S:\mathbb{R}^n\to\mathbb{R}^n$ is an $\eta$-contraction and that $T:\mathbb{R}^n\to\mathbb{R}^n$ is a nonexpansive mapping with $\operatorname{Fix}(T)\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$ and $\{z_n\}_{n\in\mathbb{N}}$ be sequences generated by SAM. Then, for any $n\ge 1$ and any $\tilde{x}\in\operatorname{Fix}(T)$, with $\tilde{z} = S(\tilde{x})$, the following inequalities hold:

$$\|y_{n+1}-y_n\| \le \|x_n - x_{n-1}\|, \tag{3.10}$$

$$\|z_{n+1}-z_n\| \le \eta\|x_n - x_{n-1}\|, \tag{3.11}$$

$$\|y_n - \tilde{x}\| \le \|x_{n-1}-\tilde{x}\|, \tag{3.12}$$

$$\|z_n - \tilde{z}\| \le \eta\|x_{n-1}-\tilde{x}\|. \tag{3.13}$$

Now, we establish the convergence rate of the sequence $\{\|x_n - x_{n-1}\|\}_{n\in\mathbb{N}}$, where $\{x_n\}_{n\in\mathbb{N}}$ is generated by SAM and the averaging parameters $\alpha_n$, $n\in\mathbb{N}$, are chosen as

$$\alpha_n = \min\Big\{ \frac{2\gamma}{n(1-\eta)},\, 1 \Big\}, \quad n\ge 1, \tag{3.14}$$

where $\gamma\in(0,1]$. For simplicity, we prove our results under the assumption that $\gamma=1$; it is important to note that all the following results remain valid for any $\gamma\in(0,1]$.

Proposition 3.1. Let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$ and $\{z_n\}_{n\in\mathbb{N}}$ be sequences generated by SAM, where $\{\alpha_n\}_{n\in\mathbb{N}}$ is defined by (3.14). Then, for any $\tilde{x}\in\operatorname{Fix}(T)$, the two sequences $\{\|x_n - x_{n-1}\|\}_{n\in\mathbb{N}}$ and $\{\|y_n - x_{n-1}\|\}_{n\in\mathbb{N}}$ converge to $0$, at the rates

$$\|x_n - x_{n-1}\| \le \frac{2C_{\tilde{x}}J}{(1-\eta)n}, \quad n\ge 1, \tag{3.15}$$

and

$$\|y_n - x_{n-1}\| \le \frac{2C_{\tilde{x}}(J+2)}{(1-\eta)n}, \quad n\ge 1, \tag{3.16}$$

where $C_{\tilde{x}}$ is defined in (3.2), and $J = \lceil 2/(1-\eta) \rceil$.

Proof. From the definitions of $x_n$ and $x_{n+1}$, we directly obtain

$$\begin{aligned}
\|x_{n+1}-x_n\| &= \big\|(1-\alpha_{n+1})y_{n+1} + \alpha_{n+1}z_{n+1} - \big((1-\alpha_n)y_n + \alpha_n z_n\big)\big\| \\
&= \big\|(1-\alpha_{n+1})(y_{n+1}-y_n) + \alpha_{n+1}(z_{n+1}-z_n) + (\alpha_n-\alpha_{n+1})(y_n-z_n)\big\| \\
&\le (1-\alpha_{n+1})\|y_{n+1}-y_n\| + \alpha_{n+1}\|z_{n+1}-z_n\| + (\alpha_n-\alpha_{n+1})\|y_n-z_n\| \\
&\le (1-\alpha_{n+1})\|x_n-x_{n-1}\| + \alpha_{n+1}\eta\|x_n-x_{n-1}\| + (\alpha_n-\alpha_{n+1})\|y_n-z_n\| \\
&= \big(1-\alpha_{n+1}(1-\eta)\big)\|x_n-x_{n-1}\| + (\alpha_n-\alpha_{n+1})\|y_n-z_n\|,
\end{aligned} \tag{3.17}$$

where the second inequality follows from (3.10) and (3.11). Now, let $\tilde{x}\in\operatorname{Fix}(T)$ and $\tilde{z} = S(\tilde{x})$; then

$$\begin{aligned}
\|y_n - z_n\| &= \|y_n - \tilde{x} + \tilde{x} - \tilde{z} + \tilde{z} - z_n\| \le \|y_n-\tilde{x}\| + \|\tilde{x}-\tilde{z}\| + \|\tilde{z}-z_n\| \\
&\le \|x_{n-1}-\tilde{x}\| + \|(I-S)\tilde{x}\| + \eta\|x_{n-1}-\tilde{x}\| \\
&\le C_{\tilde{x}} + (1-\eta)C_{\tilde{x}} + \eta C_{\tilde{x}} = 2C_{\tilde{x}},
\end{aligned} \tag{3.18}$$

where the second inequality follows from (3.12) and (3.13), as well as the definition of z ˜ , and the last inequality follows from Lemma 3.1(i). Additionally, we have

$$\|x_1 - x_0\| = \|x_1 - \tilde{x} + \tilde{x} - x_0\| \le \|x_1-\tilde{x}\| + \|x_0-\tilde{x}\| \le 2C_{\tilde{x}}, \tag{3.19}$$

where the second inequality follows from Lemma 3.1(i). Let $a_n := \|x_n-x_{n-1}\|$, $b_n := \alpha_n$, $\gamma := 1-\eta$ and $c_n := \|y_n-z_n\|$. Then (3.17) becomes

$$a_{n+1} = \|x_{n+1}-x_n\| \le \big(1-\alpha_{n+1}(1-\eta)\big)\|x_n-x_{n-1}\| + (\alpha_n-\alpha_{n+1})\|y_n-z_n\| = (1-b_{n+1}\gamma)a_n + (b_n-b_{n+1})c_n.$$

Since $c_n := \|y_n-z_n\|$, inequality (3.18) gives $c_n \le 2C_{\tilde{x}}$. Setting $M := 2C_{\tilde{x}}$ and applying Lemma 3.3, we obtain

$$a_n \le \frac{MJ}{\gamma n} = \frac{2C_{\tilde{x}}J}{(1-\eta)n},$$

which is (3.15). The convergence rate for $\{\|y_n - x_{n-1}\|\}_{n\in\mathbb{N}}$ then follows from the following argument. Recalling that $x_n = \alpha_n z_n + (1-\alpha_n)y_n$, we have

$$\begin{aligned}
\|y_n - x_{n-1}\| &= \|y_n - x_n + x_n - x_{n-1}\| \le \|y_n-x_n\| + \|x_n-x_{n-1}\| \\
&= \alpha_n\|y_n-z_n\| + \|x_n-x_{n-1}\| \\
&\le \frac{2}{(1-\eta)n}\cdot 2C_{\tilde{x}} + \frac{2C_{\tilde{x}}J}{(1-\eta)n} = \frac{2C_{\tilde{x}}(J+2)}{(1-\eta)n},
\end{aligned}$$

where the second inequality is due to the previous result as well as (3.18).

It is not hard to see from (3.16) that the sequence generated by the BiG-SAM algorithm converges to an optimal solution of the inner problem (Q) at rate $O(1/n)$. In the following, we discuss an important result, the convergence of $\{H_\mu(y_n)\}_{n\in\mathbb{N}}$. Why not discuss the convergence of $\{H_\mu(x_n)\}_{n\in\mathbb{N}}$ directly? Because $x_n$ may lie outside the domain of $g$, so that $H_\mu(x_n)$ may fail to be finite. However, since $\|y_n - x_{n-1}\|\to 0$ as $n\to\infty$ and $H_\mu$ is lower semicontinuous, proving convergence of $\{H_\mu(y_n)\}_{n\in\mathbb{N}}$ to the optimal value also yields convergence of $\{H_\mu(x_n)\}_{n\in\mathbb{N}}$ to the same value. The global convergence rate result is as follows.

Theorem 3.1. Let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$ and $\{z_n\}_{n\in\mathbb{N}}$ be sequences generated by BiG-SAM, with $\{\alpha_n\}_{n\in\mathbb{N}}$ defined by (3.14). Then, for all $\lambda \le 1/(L_f+\alpha/\mu)$ and $n\in\mathbb{N}$,

$$H_\mu(y_{n+1}) - H_\mu(x^*_{op}) \le \frac{2C_{x^*_{op}}^2(J+2)}{(n+1)(1-\eta)\lambda},$$

where $C_{x^*_{op}} = \max\big\{ \|x_0 - x^*_{op}\|,\; \frac{1}{1-\eta}\|(I-S)x^*_{op}\| \big\}$, and $J = \lceil 2/(1-\eta) \rceil$.

Proof. From Proposition 2.1, for any step size $\lambda \le 1/(L_f+\alpha/\mu)$, the following inequality holds:

$$H_\mu(y_{n+1}) - H_\mu(x^*_{op}) \le \frac{1}{\lambda}\langle x_n - y_{n+1},\, x_n - x^*_{op}\rangle - \frac{1}{2\lambda}\|x_n - y_{n+1}\|^2. \tag{3.20}$$

For $x^*_{op}\in X^* = \operatorname{Fix}(T_\lambda)$, from Lemma 3.1(i) and Proposition 3.1, we obtain

$$\langle x_n - y_{n+1},\, x_n - x^*_{op}\rangle \le \|x_n - y_{n+1}\|\,\|x_n - x^*_{op}\| \le \frac{2C_{x^*_{op}}^2(J+2)}{(1-\eta)(n+1)}. \tag{3.21}$$

Substituting (3.21) into (3.20), we get

$$H_\mu(y_{n+1}) - H_\mu(x^*_{op}) \le \frac{2C_{x^*_{op}}^2(J+2)}{(n+1)(1-\eta)\lambda}. \tag{3.22}$$

We complete the proof.

Now, we will show the complexity result of BiG-SAM algorithm.

Theorem 3.2. Suppose that Assumption I holds. Let $\varepsilon\in(0,\bar{\varepsilon})$ for some fixed $\bar{\varepsilon}>0$. Let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$ and $\{z_n\}_{n\in\mathbb{N}}$ be generated by the BiG-SAM algorithm with smoothing parameter

$$\mu = \sqrt{\frac{\alpha}{\beta}}\;\frac{\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}}.$$

Then for any $n$ satisfying

$$n \ge \frac{2\Gamma^2(J+2)\big(2\sqrt{\alpha\beta}+\sqrt{L_f\varepsilon}\big)^2}{\varepsilon^2(1-\eta)},$$

where $\Gamma = \max\big\{ R_\delta + \|x_0\|,\; \frac{1}{1-\eta}\sup_{\|x\|\le R_\delta}\|(I-S)x\| \big\}$ and $J = \lceil 2/(1-\eta) \rceil$, it holds that $H(y_n) - H_{\mathrm{opt}} \le \varepsilon$.
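To make the $\varepsilon$-dependence of Theorem 3.2 concrete, the following sketch plugs illustrative constants into the smoothing parameter $\mu$ and the iteration bound (all numerical values here, including $\Gamma$, $J$ and $\eta$, are hypothetical choices of ours, not values from the paper) and checks the expected $O(1/\varepsilon^2)$ scaling:

```python
import math

# Hypothetical problem constants (illustrative only).
alpha, beta, L_f = 1.0, 0.5, 2.0
Gamma, J, eta = 5.0, 4.0, 0.5      # here J = 2/(1 - eta) = 4

def smoothing_mu(eps):
    """Smoothing parameter of Theorem 3.2."""
    return math.sqrt(alpha / beta) * eps / (
        math.sqrt(alpha * beta) + math.sqrt(alpha * beta + L_f * eps))

def iteration_bound(eps):
    """Iteration bound of Theorem 3.2 guaranteeing H(y_n) - H_opt <= eps."""
    return (2 * Gamma**2 * (J + 2)
            * (2 * math.sqrt(alpha * beta) + math.sqrt(L_f * eps))**2
            / (eps**2 * (1 - eta)))

eps = 1e-4
assert 0 < smoothing_mu(eps) < 2 * eps
# Halving eps roughly quadruples the bound: O(1/eps^2) complexity.
assert iteration_bound(eps / 2) / iteration_bound(eps) > 3.9
```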

Proof. Using the $\frac{1}{\mu}$-smooth approximation property of $h_\mu$ with parameters $(\alpha,\beta)$, it follows that for any $y\in\mathbb{R}^n$,

$$H_\mu(y) \le H(y) \le H_\mu(y) + \beta\mu. \tag{3.23}$$

In particular, the following two inequalities hold:

$$H_{\mathrm{opt}} \ge H_\mu(x^*_{op}) \quad\text{and}\quad H(y_n) \le H_\mu(y_n) + \beta\mu, \quad n=0,1,\dots. \tag{3.24}$$

Combined with (3.22) and $\lambda = 1/(L_f+\alpha/\mu)$, this yields

$$\begin{aligned}
H(y_n) - H_{\mathrm{opt}} &\le H_\mu(y_n) + \beta\mu - H_\mu(x^*_{op}) \le \frac{2C_{x^*_{op}}^2(J+2)}{(n+1)(1-\eta)\lambda} + \beta\mu \\
&= \frac{2L_f C_{x^*_{op}}^2(J+2)}{(n+1)(1-\eta)} + \frac{2\alpha C_{x^*_{op}}^2(J+2)}{(n+1)(1-\eta)}\cdot\frac{1}{\mu} + \beta\mu \\
&\le \frac{2L_f C_{x^*_{op}}^2(J+2)}{n(1-\eta)} + \frac{2\alpha C_{x^*_{op}}^2(J+2)}{n(1-\eta)}\cdot\frac{1}{\mu} + \beta\mu,
\end{aligned}$$

where $C_{x^*_{op}} := \max\big\{ \|x_0-x^*_{op}\|, \frac{1}{1-\eta}\|(I-S)x^*_{op}\| \big\}$ and $J = \lceil 2/(1-\eta) \rceil$. Therefore, for a given $N>0$ and any $n\ge N$,

$$H(y_n) - H_{\mathrm{opt}} \le \frac{2L_f C_{x^*_{op}}^2(J+2)}{N(1-\eta)} + \frac{2\alpha C_{x^*_{op}}^2(J+2)}{N(1-\eta)}\cdot\frac{1}{\mu} + \beta\mu. \tag{3.25}$$

Minimizing the right-hand side with respect to $\mu$, we obtain

$$\mu = \sqrt{\frac{2\alpha C_{x^*_{op}}^2(J+2)}{N\beta(1-\eta)}}. \tag{3.26}$$

Plugging (3.26) into (3.25), it follows that for any $n\ge N$,

$$H(y_n) - H_{\mathrm{opt}} \le \frac{2L_f C_{x^*_{op}}^2(J+2)}{N(1-\eta)} + 2\sqrt{\frac{2\alpha\beta C_{x^*_{op}}^2(J+2)}{N(1-\eta)}}.$$

Thus, to ensure that $y_n$ is an $\varepsilon$-optimal solution for any $n\ge N$, it suffices that $N$ satisfies

$$\frac{2L_f C_{x^*_{op}}^2(J+2)}{N(1-\eta)} + 2\sqrt{\frac{2\alpha\beta C_{x^*_{op}}^2(J+2)}{N(1-\eta)}} \le \varepsilon.$$

Setting $\tau^2 = \frac{2C_{x^*_{op}}^2(J+2)}{N(1-\eta)}$, the above inequality reduces to

$$L_f\tau^2 + 2\sqrt{\alpha\beta}\,\tau - \varepsilon \le 0,$$

which, since $\tau>0$, is equivalent to

$$\sqrt{\frac{2C_{x^*_{op}}^2(J+2)}{N(1-\eta)}} = \tau \le \frac{-\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}}{L_f} = \frac{\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}}.$$

We conclude that $N$ should satisfy

$$N \ge \frac{2C_{x^*_{op}}^2(J+2)\big(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\big)^2}{\varepsilon^2(1-\eta)}.$$

In particular, if we choose

$$N = N_1 := \frac{2C_{x^*_{op}}^2(J+2)\big(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\big)^2}{\varepsilon^2(1-\eta)},$$

and $\mu$ according to (3.26), meaning that

$$\mu = \sqrt{\frac{2\alpha C_{x^*_{op}}^2(J+2)}{N_1\beta(1-\eta)}} = \sqrt{\frac{\alpha}{\beta}}\;\frac{\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}},$$

then for any $n\ge N_1$, it holds that $H(y_n) - H_{\mathrm{opt}} \le \varepsilon$. By (3.23) and (3.24),

$$H(x^*_{op}) - \beta\mu \le H_\mu(x^*_{op}) \le H_{\mathrm{opt}} \le H(y_0),$$

which, along with the inequality

$$\mu = \sqrt{\frac{\alpha}{\beta}}\;\frac{\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}} \le \sqrt{\frac{\alpha}{\beta}}\;\frac{\varepsilon}{2\sqrt{\alpha\beta}} = \frac{\varepsilon}{2\beta} \le \frac{\bar{\varepsilon}}{2\beta},$$

implies that $H(x^*_{op}) \le H(y_0) + \frac{\bar{\varepsilon}}{2}$. Hence, by Assumption I (iv), it follows that $\|x^*_{op}\| \le R_\delta$, where $\delta := H(y_0) + \frac{\bar{\varepsilon}}{2}$. Therefore, $C_{x^*_{op}} \le \max\big\{ R_\delta + \|x_0\|,\; \frac{1}{1-\eta}\sup_{\|x\|\le R_\delta}\|(I-S)x\| \big\} = \Gamma$.

Consequently,

$$N_1 = \frac{2C_{x^*_{op}}^2(J+2)\big(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\big)^2}{\varepsilon^2(1-\eta)} \le \frac{2C_{x^*_{op}}^2(J+2)\big(2\sqrt{\alpha\beta}+\sqrt{L_f\varepsilon}\big)^2}{\varepsilon^2(1-\eta)} \le \frac{2\Gamma^2(J+2)\big(2\sqrt{\alpha\beta}+\sqrt{L_f\varepsilon}\big)^2}{\varepsilon^2(1-\eta)} =: N_2,$$

where the first inequality follows from the fact that $\sqrt{\gamma+\delta} \le \sqrt{\gamma}+\sqrt{\delta}$ for any $\gamma,\delta\ge 0$, and the second from $C_{x^*_{op}} \le \Gamma$. Consequently, for any $n\ge N_2$, we have $H(y_n) - H_{\mathrm{opt}} \le \varepsilon$, which establishes the desired result.

4. BiG-SAM for Nonsmooth Bi-level Optimization Problems

In this section, we consider problem (OP) as described in Section 2 in the case where the objective function $\omega$ does not necessarily satisfy Assumption II, but instead satisfies Assumption III.

Note that BiG-SAM cannot be applied directly to bi-level problems under Assumption III. However, we can handle this case indirectly. By the strong convexity of $\omega$, we can smooth $\omega$ via the Moreau envelope $M_{s\omega}$. Recalling the properties of the Moreau envelope from Section 2, $M_{s\omega}$ is continuously differentiable with a $1/s$-Lipschitz continuous gradient ($1/s>0$) and is strongly convex (see Lemma 2.3). Thus $M_{s\omega}$ satisfies Assumption II, which makes the BiG-SAM algorithm applicable. In this case, step (3.7) simplifies as follows:

$$z_n = x_{n-1} - s\nabla M_{s\omega}(x_{n-1}) = x_{n-1} - s\cdot\frac{1}{s}\big( x_{n-1} - \operatorname{prox}_{s\omega}(x_{n-1}) \big) = \operatorname{prox}_{s\omega}(x_{n-1}), \tag{4.1}$$

where the second equality follows from (2.13). This implies that computing z n ( n ) requires evaluating the proximal mapping of ω .
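Step (4.1) can be illustrated on a nonsmooth strongly convex outer function with a closed-form prox (our own example, not from the paper): for $\omega(x)=\frac12\|x\|^2+\|x\|_1$ ($\sigma=1$), the prox is a scaled soft threshold, and the sketch below also checks the contraction factor $1/(1+s\sigma)$ from ([12], Lemma 6):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonsmooth strongly convex outer function:
#   omega(x) = 0.5*||x||^2 + ||x||_1,  sigma = 1.
# Its proximal mapping has the closed form below, so step (4.1),
# z_n = prox_{s*omega}(x_{n-1}), is cheap to evaluate.
s, sigma = 0.5, 1.0

def prox_s_omega(x):
    return np.sign(x) * np.maximum(np.abs(x) - s, 0.0) / (1.0 + s)

# The prox of a sigma-strongly convex function is a contraction with
# factor eta = 1/(1 + s*sigma); verify on random pairs of points.
eta = 1.0 / (1.0 + s * sigma)
for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.linalg.norm(prox_s_omega(x) - prox_s_omega(y))
    assert lhs <= eta * np.linalg.norm(x - y) + 1e-12
```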

Remark 1. Note that the proximal mapping of a strongly convex function is a contraction ([12], Lemma 6), making the theory in Section 3.1 applicable here. A direct consequence of Lemma 3.2 applies to the mappings:

$$S(x) = x - s\nabla M_{s\omega}(x) \quad\text{and}\quad T(x) = \operatorname{prox}_{\lambda g}\big( x - \lambda\nabla F_\mu(x) \big),$$

where s>0 and λ( 0,1/ ( L f +α/μ ) ] .

Lemma 4.1. [12] Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence generated by BiG-SAM. Under Assumptions I, III and IV, for $s>0$, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges to $x^*_s\in X^*$, where $x^*_s$ satisfies

$$\langle \nabla M_{s\omega}(x^*_s),\, x - x^*_s \rangle \ge 0, \quad \forall x\in X^*. \tag{4.2}$$

Thus, x s * is the optimal solution of the problem (OP) with respect to the Moreau envelope M sω , i.e.,

$$x^*_s = \arg\min_{x\in X^*} M_{s\omega}(x),$$

where X * is the set of optimal solutions of problem (Q).

Smoothing $\omega$ does not affect the convergence rate, which is stated in terms of the inner function. From [12], we know that the convergence rate depends on the contraction parameter $\eta$; by ([12], Lemma 6),

$$\eta = \frac{1}{1+s\sigma}.$$

Let $\delta>0$ be the required uniform accuracy in terms of the outer objective function, that is,

$$\omega(x_n) - M_{s\omega}(x_n) \le \delta, \quad \forall n\in\mathbb{N}, \tag{4.3}$$

where it should be noted that $\omega(x_n) - M_{s\omega}(x_n) \ge 0$ for all $n\in\mathbb{N}$. Now, we aim to determine the number of iterations $N$ required to achieve an $\varepsilon$-optimal solution of the inner problem, that is,

$$H(y_N) - H_{\mathrm{opt}} \le \varepsilon,$$

while keeping the uniform accuracy as given in (4.3). This means that N depends on both ε and δ .

Proposition 4.1. Let $\varepsilon\in(0,\bar{\varepsilon})$ for some fixed $\bar{\varepsilon}>0$. Let $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$ be sequences generated by BiG-SAM, and suppose that Assumptions I, III and IV hold. In addition, suppose that the smoothing parameters are chosen as

$$s = \frac{2\delta}{\ell_\omega^2}$$

and

$$\mu = \sqrt{\frac{\alpha}{\beta}}\;\frac{\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}}.$$

Then (4.3) holds, and for any

$$n \ge \frac{2\Gamma^2\big(2\sqrt{\alpha\beta}+\sqrt{L_f\varepsilon}\big)^2}{\varepsilon^2}\Big( 2 + \frac{3\ell_\omega^2}{2\sigma\delta} + \frac{\ell_\omega^4}{4\sigma^2\delta^2} \Big),$$

where $\Gamma = \max\big\{ R_\delta + \|x_0\|,\; \frac{1}{1-\eta}\sup_{\|x\|\le R_\delta}\|(I-S)x\| \big\}$, it holds that $H(y_n) - H_{\mathrm{opt}} \le \varepsilon$.

Proof. Since $\omega$ is Lipschitz continuous with constant $\ell_\omega$ (see Assumption III), the norms of the subgradients of $\omega$ are bounded above by $\ell_\omega$. Thus, from ([15], Lemma 4.2) it follows that, for all $x\in\mathbb{R}^n$,

$$\omega(x) - \frac{s\ell_\omega^2}{2} \le M_{s\omega}(x) \le \omega(x).$$

Therefore, for $s = 2\delta/\ell_\omega^2$, we obtain

$$\omega(x_n) - M_{s\omega}(x_n) \le \delta, \quad \forall n\in\mathbb{N}.$$

Using the 1 μ -smooth approximation property of h μ with parameters ( α,β ) , it follows that for any y n ,

$$H_\mu(y) \le H(y) \le H_\mu(y) + \beta\mu. \tag{4.4}$$

In particular, the following two inequalities hold:

$$H_{\mathrm{opt}} \ge H_\mu(x_{\mathrm{op}}^*) \quad\text{and}\quad H(y_n) \le H_\mu(y_n)+\beta\mu,\ n=0,1,\ldots, \tag{4.5}$$

which, combined with (3.22), yields

$$\begin{aligned} H(y_n) - H_{\mathrm{opt}} &\le H_\mu(y_n) + \beta\mu - H_\mu(x_{\mathrm{op}}^*) \le \frac{2C_{x_{\mathrm{op}}^*}^2(J+2)}{(n+1)(1-\eta)\lambda} + \beta\mu \\ &= \frac{2L_f C_{x_{\mathrm{op}}^*}^2(J+2)}{(n+1)(1-\eta)} + \left(\frac{2\alpha C_{x_{\mathrm{op}}^*}^2(J+2)}{(n+1)(1-\eta)}\right)\frac{1}{\mu} + \beta\mu \\ &\le \frac{2L_f C_{x_{\mathrm{op}}^*}^2(J+2)}{n(1-\eta)} + \left(\frac{2\alpha C_{x_{\mathrm{op}}^*}^2(J+2)}{n(1-\eta)}\right)\frac{1}{\mu} + \beta\mu, \end{aligned}$$

where $C_{x_{\mathrm{op}}^*} := \max\left\{\|x_0-\tilde{x}\|,\ \frac{1}{1-\eta}\|(I-S)\tilde{x}\|\right\}$ and $J = \lceil 2/(1-\eta) \rceil$. Therefore, for a given $N>0$, it holds that for any $n \ge N$,

$$H(y_n) - H_{\mathrm{opt}} \le \frac{2L_f C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)} + \left(\frac{2\alpha C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)}\right)\frac{1}{\mu} + \beta\mu. \tag{4.6}$$

Minimizing the right-hand side with respect to $\mu$, we obtain

$$\mu = \sqrt{\frac{2\alpha C_{x_{\mathrm{op}}^*}^2(J+2)}{\beta N(1-\eta)}}. \tag{4.7}$$
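The choice (4.7) is the standard minimizer of a function of the form $\varphi(\mu) = a/\mu + \beta\mu$ with $a = 2\alpha C_{x_{\mathrm{op}}^*}^2(J+2)/(N(1-\eta))$, whose minimum value is $2\sqrt{a\beta}$. A quick numeric check with hypothetical constants (the names `a` and `beta` below stand in for the aggregated quantities above):

```python
import math

# phi(mu) = a/mu + beta*mu is minimized at mu* = sqrt(a/beta),
# with minimum value 2*sqrt(a*beta). Constants are illustrative only.
a, beta = 3.0, 0.7
mu_star = math.sqrt(a / beta)
phi = lambda mu: a / mu + beta * mu

# mu* is no worse than a grid of nearby candidates
assert all(phi(mu_star) <= phi(mu) + 1e-12 for mu in [0.5, 1.0, 2.0, 5.0, 10.0])
# and attains the closed-form minimum value
assert abs(phi(mu_star) - 2 * math.sqrt(a * beta)) < 1e-12
```

Substituting $\mu^* = \sqrt{a/\beta}$ back into (4.6) is what produces the $2\sqrt{a\beta}$ term appearing in the next display.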

Plugging this expression into (4.6), we conclude that, for any $n \ge N$,

$$H(y_n) - H_{\mathrm{opt}} \le \frac{2L_f C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)} + 2\sqrt{\frac{2\alpha\beta\, C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)}}.$$

Thus, to guarantee that $y_n$ is an $\varepsilon$-optimal solution for any $n \ge N$, it suffices that $N$ satisfies

$$\frac{2L_f C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)} + 2\sqrt{\frac{2\alpha\beta\, C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)}} \le \varepsilon.$$

Denoting $\tau^2 = \frac{2C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)}$, the above inequality reduces to

$$L_f\tau^2 + 2\sqrt{\alpha\beta}\,\tau - \varepsilon \le 0,$$

which, since $\tau>0$, is equivalent to

$$\sqrt{\frac{2C_{x_{\mathrm{op}}^*}^2(J+2)}{N(1-\eta)}} = \tau \le \frac{-\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}}{L_f} = \frac{\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}}.$$
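The last equality above is the usual rationalization of the positive root of the quadratic: multiplying numerator and denominator by $\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}$ turns $(\sqrt{\alpha\beta+L_f\varepsilon}-\sqrt{\alpha\beta})/L_f$ into $\varepsilon/(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon})$. A numeric sanity check over a couple of arbitrary constant choices:

```python
import math

# Verify the rationalized-root identity and that the root satisfies the
# quadratic L_f*t^2 + 2*sqrt(ab)*t - eps = 0. Constants are arbitrary.
for L_f, alpha, beta, eps in [(1.0, 2.0, 0.5, 0.3), (4.0, 0.1, 0.9, 1.5)]:
    ab = alpha * beta
    lhs = (math.sqrt(ab + L_f * eps) - math.sqrt(ab)) / L_f
    rhs = eps / (math.sqrt(ab) + math.sqrt(ab + L_f * eps))
    assert abs(lhs - rhs) < 1e-12
    # the positive root attains equality in the quadratic
    assert abs(L_f * lhs**2 + 2 * math.sqrt(ab) * lhs - eps) < 1e-9
```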

We conclude that N should satisfy

$$N \ge \frac{2C_{x_{\mathrm{op}}^*}^2(J+2)\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2(1-\eta)}.$$

In particular, if we choose

$$N = N_3 \ge \frac{2C_{x_{\mathrm{op}}^*}^2(J+2)\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2(1-\eta)}. \tag{4.8}$$

Now, substituting $J = \lceil 2/(1-\eta) \rceil$, $\eta = 1/(1+s\sigma)$, and $s = 2\delta/\ell_\omega^2$ into (4.8), we obtain

$$\begin{aligned} N = N_3 &\ge \frac{2C_{x_{\mathrm{op}}^*}^2(J+2)\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2(1-\eta)} = \frac{2C_{x_{\mathrm{op}}^*}^2\left(\frac{2}{1-\eta}+2\right)\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2(1-\eta)} \\ &= \frac{4C_{x_{\mathrm{op}}^*}^2\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2(2-\eta)}{\varepsilon^2(1-\eta)^2} = \frac{4C_{x_{\mathrm{op}}^*}^2\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2}\left(2+\frac{3}{s\sigma}+\frac{1}{(s\sigma)^2}\right) \\ &= \frac{4C_{x_{\mathrm{op}}^*}^2\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2}\left(2+\frac{3\ell_\omega^2}{2\sigma\delta}+\frac{\ell_\omega^4}{4\sigma^2\delta^2}\right) \end{aligned}$$

and μ according to (4.7), meaning that

$$\mu = \sqrt{\frac{2\alpha C_{x_{\mathrm{op}}^*}^2(J+2)}{\beta N_3(1-\eta)}} = \frac{\sqrt{\alpha/\beta}\,\varepsilon}{\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}},$$

then for any $n \ge N_3$ it holds that $H(y_n) - H_{\mathrm{opt}} \le \varepsilon$. By (4.4) and (4.5),

$$H(x_{\mathrm{op}}^*) - \beta\mu \le H_\mu(x_{\mathrm{op}}^*) \le H_{\mathrm{opt}} \le H(y_0),$$

which along with the inequality

μ= α β ε αβ + αβ+ L f ε α β ε αβ + αβ ε ¯ 2β ,

implies that $H(x_{\mathrm{op}}^*) \le H(y_0) + \frac{\bar{\varepsilon}}{2}$, and hence, by Assumption I (iv), it follows that $\|\tilde{x}\| \le R_\delta$, where $\delta := H(y_0) + \frac{\bar{\varepsilon}}{2}$. Therefore, $C_{x_{\mathrm{op}}^*} \le \max\left\{R_\delta+\|x_0\|,\ \frac{1}{1-\eta}\|I-S\|R_\delta\right\} = \Gamma$. Consequently,

$$\begin{aligned} N_3 &= \frac{4C_{x_{\mathrm{op}}^*}^2\left(\sqrt{\alpha\beta}+\sqrt{\alpha\beta+L_f\varepsilon}\right)^2}{\varepsilon^2}\left(2+\frac{3\ell_\omega^2}{2\sigma\delta}+\frac{\ell_\omega^4}{4\sigma^2\delta^2}\right) \\ &\le \frac{4C_{x_{\mathrm{op}}^*}^2\left(2\sqrt{\alpha\beta}+\sqrt{L_f\varepsilon}\right)^2}{\varepsilon^2}\left(2+\frac{3\ell_\omega^2}{2\sigma\delta}+\frac{\ell_\omega^4}{4\sigma^2\delta^2}\right) \\ &\le \frac{2\Gamma^2\left(2\sqrt{\alpha\beta}+\sqrt{L_f\varepsilon}\right)^2}{\varepsilon^2}\left(2+\frac{3\ell_\omega^2}{2\sigma\delta}+\frac{\ell_\omega^4}{4\sigma^2\delta^2}\right) =: N_4. \end{aligned}$$

The first inequality above follows from the fact that $\sqrt{\gamma+\delta} \le \sqrt{\gamma}+\sqrt{\delta}$ for any $\gamma,\delta \ge 0$, applied with $\gamma = \alpha\beta$ and $\delta = L_f\varepsilon$. The desired result follows by taking $n$ at least as large as the upper bound $N_4$ derived above. $\square$
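To make the bound concrete, the final estimate can be evaluated numerically; every constant below is hypothetical and chosen only to illustrate the $O(1/\varepsilon^2)$ scaling (for fixed $\delta$) of the guaranteed iteration count.

```python
import math

# Evaluate the final bound of Proposition 4.1 for sample constants.
L_f, alpha, beta = 1.0, 0.5, 0.5   # assumed smoothness / smoothing parameters
sigma, l_omega = 0.5, 2.0          # assumed strong convexity, Lipschitz const.
delta, eps = 0.1, 1e-2             # uniform accuracy and target accuracy
Gamma = 10.0                       # assumed bound on C_{x_op^*}

factor = 2 + 3 * l_omega**2 / (2 * sigma * delta) \
           + l_omega**4 / (4 * sigma**2 * delta**2)
N4 = (2 * Gamma**2
      * (2 * math.sqrt(alpha * beta) + math.sqrt(L_f * eps))**2
      / eps**2) * factor

print(math.ceil(N4))  # a sufficient (very conservative) iteration count
```

Note how the bound deteriorates in both accuracies: the leading term scales like $1/\varepsilon^2$, and the bracketed factor like $1/\delta^2$, matching the trade-off discussed before Proposition 4.1.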

5. Conclusion

In this paper, we construct a novel bi-level gradient sequential averaging method (BiG-SAM) for solving a more general composite convex bi-level optimization problem, where the inner-level problem is to minimize the sum of three functions: a smooth function and two non-smooth functions, one of which is smoothable. We analyze the convergence rate of BiG-SAM in two cases, where the outer objective is smooth or non-smooth; the global convergence rate with respect to the inner objective function is $O(1/n)$. In future work, we could further explore the convergence rate and complexity analysis for the outer objective function. Additionally, we could design stochastic and parallel variants of BiG-SAM for high-dimensional data or distributed scenarios. This would help reduce computational cost while ensuring convergence and scalability of both the outer and inner objectives in distributed environments.

NOTES

*First author.

#Corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Dempe, S., Dinh, N. and Dutta, J. (2010) Optimality Conditions for a Simple Convex Bilevel Programming Problem. In: Burachik, R. and Yao, J.C., Eds., Variational Analysis and Generalized Differentiation in Optimization and Control, Springer, 149-161.
https://doi.org/10.1007/978-1-4419-0437-9_7
[2] Dempe, S. (2002) Foundations of Bilevel Programming. Springer Science & Business Media.
[3] Tikhonov, A.N. and Arsenin, V.I. (1977) Solutions of Ill-Posed Problems. Winston & Sons.
[4] Mangasarian, O.L. and Meyer, R.R. (1979) Nonlinear Perturbation of Linear Programs. SIAM Journal on Control and Optimization, 17, 745-752.
https://doi.org/10.1137/0317052
[5] Ferris, M.C. and Mangasarian, O.L. (1991) Finite Perturbation of Convex Programs. Applied Mathematics & Optimization, 23, 263-273.
https://doi.org/10.1007/bf01442401
[6] Solodov, M. (2007) An Explicit Descent Method for Bilevel Convex Optimization. Journal of Convex Analysis, 14, 227-237.
[7] Cabot, A. (2005) Proximal Point Algorithm Controlled by a Slowly Vanishing Term: Applications to Hierarchical Minimization. SIAM Journal on Optimization, 15, 555-572.
https://doi.org/10.1137/s105262340343467x
[8] Boţ, R.I. and Nguyen, D. (2018) A Forward-Backward Penalty Scheme with Inertial Effects for Monotone Inclusions. Applications to Convex Bilevel Programming. Optimization, 68, 1855-1880.
https://doi.org/10.1080/02331934.2018.1556662
[9] Malitsky, Y. (2017). Chambolle-Pock and Tseng’s Methods: Relationship and Extension to the Bilevel Optimization.
https://optimization-online.org/2017/06/6103/
[10] Yamada, I., Yukawa, M. and Yamagishi, M. (2011) Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings. In: Bauschke, H., Burachik, R., Combettes, P., Elser, V., Luke, D. and Wolkowicz, H., Eds., Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer, 345-390.
https://doi.org/10.1007/978-1-4419-9569-8_17
[11] Beck, A. and Sabach, S. (2013) A First Order Method for Finding Minimal Norm-Like Solutions of Convex Optimization Problems. Mathematical Programming, 147, 25-46.
https://doi.org/10.1007/s10107-013-0708-2
[12] Sabach, S. and Shtern, S. (2017) A First Order Method for Solving Convex Bilevel Optimization Problems. SIAM Journal on Optimization, 27, 640-660.
https://doi.org/10.1137/16m105592x
[13] Xu, H. (2004) Viscosity Approximation Methods for Nonexpansive Mappings. Journal of Mathematical Analysis and Applications, 298, 279-291.
https://doi.org/10.1016/j.jmaa.2004.04.059
[14] Shehu, Y., Vuong, P.T. and Zemkoho, A. (2019) An Inertial Extrapolation Method for Convex Simple Bilevel Optimization. Optimization Methods and Software, 36, 1-19.
https://doi.org/10.1080/10556788.2019.1619729
[15] Beck, A. and Teboulle, M. (2012) Smoothing and First Order Methods: A Unified Framework. SIAM Journal on Optimization, 22, 557-580.
https://doi.org/10.1137/100818327
[16] Shor, N.Z. (1985) Minimization Methods for Nondifferentiable Functions. Springer-Verlag.
[17] Ben-Tal, A. and Teboulle, M. (1989) A Smoothing Technique for Nondifferentiable Optimization Problems. In: Dolecki, S., Ed., Optimization, Springer, 1-11.
https://doi.org/10.1007/bfb0083582
[18] Bertsekas, D.P. (1975) Nondifferentiable Optimization via Approximation. In: Balinski, M.L. and Wolfe, P., Eds., Nondifferentiable Optimization, Springer, 1-25.
https://doi.org/10.1007/bfb0120696
[19] Bertsekas, D.P. (1982) Constrained Optimization and Lagrange Multiplier Methods. Academic Press.
[20] Beck, A. and Teboulle, M. (2009) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2, 183-202.
https://doi.org/10.1137/080716542
[21] Nesterov, Y. (2012) Gradient Methods for Minimizing Composite Functions. Mathematical Programming, 140, 125-161.
https://doi.org/10.1007/s10107-012-0629-5
[22] Beck, A. and Teboulle, M. (2010) Gradient-Based Algorithms with Applications to Signal-Recovery Problems. Journal of Convex Analysis, 17, 445-477.
[23] Combettes, P.L. and Pesquet, J. (2011) Proximal Splitting Methods in Signal Processing. In: Bauschke, H., Burachik, R., Combettes, P., Elser, V., Luke, D. and Wolkowicz, H., Ed., Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer, 185-212.
https://doi.org/10.1007/978-1-4419-9569-8_10
[24] Bauschke, H.H. and Combettes, P.L. (2019) Correction To: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. In: Bauschke, H.H. and Combettes, P.L., Eds., Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, C1-C4.
https://doi.org/10.1007/978-3-319-48311-5_31

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.