TITLE:
Research on the Proximal Gradient Method for Composite Optimization Problems under Generalized Smoothness Assumptions
AUTHORS:
Lin Yang, Na Xian
KEYWORDS:
Generalized Smoothness, Proximal Gradient Method, Composite Optimization Problem
JOURNAL NAME:
Journal of Applied Mathematics and Physics, Vol.14 No.4, April 23, 2026
ABSTRACT: The proximal gradient method (PGD) is an important approach for solving composite optimization problems consisting of the sum of a smooth function and a nonsmooth function. Classical convergence analyses of PGD typically assume that the smooth function has a globally Lipschitz continuous gradient. In recent years, researchers have relaxed this assumption from various perspectives, thereby providing theoretical support for the application of PGD to more general problems. In particular, for unconstrained smooth optimization problems, Li et al. introduced the concept of ℓ(⋅)-smoothness, studied the convergence rates of classical gradient methods, and showed that under this generalized smoothness condition, the convergence rates of classical gradient methods remain consistent with those obtained under the classical smoothness condition. Nevertheless, existing results focus mostly on unconstrained smooth optimization, and the corresponding theoretical analysis of PGD for composite optimization problems still requires further development. To this end, this paper investigates the convergence rate of PGD for solving composite optimization problems within the ℓ(⋅)-smoothness framework. Under the assumption that the smooth component of the composite problem is convex, we prove that the sequence of function values generated by constant-stepsize PGD under ℓ(⋅)-smoothness achieves a convergence rate of O(1/k).
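For context, the iteration the abstract refers to is x_{k+1} = prox_{αg}(x_k − α∇f(x_k)), where f is the smooth component and g the nonsmooth one. Below is a minimal, illustrative Python sketch of constant-stepsize PGD on a lasso instance; the choice of f, g, data, and stepsize are our own assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of constant-stepsize proximal gradient descent (PGD) for
# min_x f(x) + g(x), illustrated on the lasso problem
# f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
# The data (A, b), weight lam, and stepsize alpha are illustrative choices.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pgd_lasso(A, b, lam, alpha, iters=500):
    """Constant-stepsize PGD: x_{k+1} = prox_{alpha*g}(x_k - alpha*grad_f(x_k))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                           # gradient of the smooth part f
        x = soft_threshold(x - alpha * grad, alpha * lam)  # proximal step on g
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
# For this classically L-smooth f, alpha <= 1/L with L = ||A||_2^2 yields the
# standard O(1/k) rate in function values; the paper's contribution is to
# extend such guarantees to the broader l(.)-smoothness setting.
alpha = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = pgd_lasso(A, b, lam=0.1, alpha=alpha)
```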