2025 • Journal Article

Exact convergence rate of the last iterate in subgradient methods



Authors:
Zamani, Moslem, Glineur, François
Published in:
SIAM Journal on Optimization

Volume: 35 • Number: 3 • Pages: 2182-2201

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. Using a novel proof technique that tracks distances to varying reference points, we derive tight worst-case convergence rates for the (projected) subgradient method with constant step sizes or step lengths. We identify the optimal constant step size for a given number of iterations N, yielding a last-iterate accuracy smaller than (BR/sqrt(N+1)) * sqrt(1 + log(N)/4), where B bounds the subgradient norms and R bounds the initial distance to a minimizer. We also propose a new optimal subgradient method based on a linearly decaying sequence of step sizes, which achieves a last-iterate rate equal to BR/sqrt(N+1), matching the theoretical lower bound. Finally, we show that no universal step size sequence can attain this rate across all iterations, highlighting that the dependence of the step size sequence on N is unavoidable.
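The abstract compares two step-size schedules for the subgradient method. A minimal sketch is below; the constant step size R/(B*sqrt(N+1)) is the classical choice (the paper derives the exact optimum, which is not reproduced here), and the linearly decaying schedule is purely illustrative of the idea of decreasing steps tuned to a horizon N, not the paper's optimal sequence.

```python
import numpy as np

def subgradient_method(subgrad, x0, step_sizes):
    """Run x_{k+1} = x_k - h_k * g_k and return the last iterate."""
    x = np.asarray(x0, dtype=float)
    for h in step_sizes:
        x = x - h * subgrad(x)
    return x

# Toy nonsmooth convex objective: f(x) = |x1| + |x2|, minimized at 0.
f = lambda x: np.sum(np.abs(x))
subgrad = lambda x: np.sign(x)  # a valid subgradient of f at x

N = 100
x0 = np.array([1.0, -0.5])
B = np.sqrt(2.0)            # bound on subgradient norms for this f
R = np.linalg.norm(x0)      # distance from x0 to the minimizer

# Constant step size over N iterations (classical tuning, not the paper's exact optimum).
h_const = R / (B * np.sqrt(N + 1))
x_const = subgradient_method(subgrad, x0, [h_const] * N)

# Linearly decaying step sizes (illustrative shape only; the paper gives the optimal sequence).
h_decay = [2 * R * (N + 1 - k) / (B * (N + 1) ** 1.5) for k in range(N)]
x_decay = subgradient_method(subgrad, x0, h_decay)

print(f(x_const), f(x_decay))  # both last iterates end up near the optimum f* = 0
```

Note that both schedules depend explicitly on the horizon N, which is the point of the paper's final result: no single N-independent sequence can match the optimal last-iterate rate for all N.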
