Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions

Research output: Contribution to journal › Article

Abstract

It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo and Wolfe inexact line search conditions on the non-smooth convex function f(x) = a|x^(1)| + ∑_{i=2}^n x^(i). We show that if a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at any point x_0 ∈ R^n with x_0^(1) ≠ 0, the iterates converge to a point x̄ with x̄^(1) = 0, although f is unbounded below. We also give conditions under which f(x_k) → −∞, using a specific Armijo–Wolfe bracketing line search. Our experimental results demonstrate that our analysis is reasonably tight.
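The function analysed is simple enough to experiment with directly. Below is a minimal sketch, not the authors' code: a gradient method with an Armijo–Wolfe bracketing line search applied to f(x) = a|x^(1)| + ∑_{i=2}^n x^(i). The parameter values (c1, c2, a, the iteration caps) and the doubling/bisection bracketing scheme are illustrative assumptions, not the specific line search studied in the paper.

```python
import numpy as np

def f(x, a):
    # f(x) = a*|x_1| + sum of the remaining components
    return a * abs(x[0]) + np.sum(x[1:])

def grad(x, a):
    # Gradient where it exists; at x_1 = 0 we take the subgradient with 0
    # in the first component (an arbitrary but convenient choice).
    g = np.ones_like(x)
    g[0] = a * np.sign(x[0])
    return g

def armijo_wolfe_search(x, d, a, c1=1e-4, c2=0.9, max_iter=50):
    """Bracketing line search (sketch): double the trial step until the
    Armijo condition fails or the Wolfe condition holds, then bisect."""
    f0 = f(x, a)
    g0 = grad(x, a) @ d          # directional derivative at x (negative)
    lo, hi, t = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        armijo = f(x + t * d, a) <= f0 + c1 * t * g0
        wolfe = grad(x + t * d, a) @ d >= c2 * g0
        if not armijo:
            hi = t               # step too long: shrink the bracket
        elif not wolfe:
            lo = t               # step too short: grow the bracket
        else:
            return t             # both conditions hold
        t = 2.0 * t if hi == np.inf else 0.5 * (lo + hi)
    return t

def gradient_method(x0, a, steps=100):
    x = x0.astype(float)
    for _ in range(steps):
        d = -grad(x, a)
        x = x + armijo_wolfe_search(x, d, a) * d
    return x

# With a large enough a, the iterates stall near x_1 = 0 even though f is
# unbounded below; with a small a, f(x_k) decreases without bound.
x = gradient_method(np.array([1.0, 0.0, 0.0]), a=10.0)
print(x[0], f(x, 10.0))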

Original language: English (US)
Journal: Optimization Methods and Software
DOI: 10.1080/10556788.2019.1673388
State: Accepted/In press - Jan 1 2019

Keywords

  • convex optimization
  • non-smooth optimization
  • Steepest descent method

ASJC Scopus subject areas

  • Software
  • Control and Optimization
  • Applied Mathematics

Cite this

@article{38749db7e6ab4f11add0de5e74e47181,
title = "Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions",
abstract = "It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo and Wolfe inexact line search conditions on the non-smooth convex function f (x) = a|x(1)| + ∑n i=2 x(i). We show that if a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at any point x0 ϵ Rn with (x(1) 0 ≠ 0), the iterates converge to a point (Formula presented.) with (Formula presented.), although f is unbounded below. We also give conditions under which the iterates f (xk)→−∞,), using a specific Armijo–Wolfe bracketing line search. Our experimental results demonstrate that our analysis is reasonably tight.",
keywords = "convex optimization, non-smooth optimization, Steepest descent method",
author = "Azam Asl and Overton, {Michael L.}",
year = "2019",
month = "1",
day = "1",
doi = "10.1080/10556788.2019.1673388",
language = "English (US)",
journal = "Optimization Methods and Software",
issn = "1055-6788",
publisher = "Taylor and Francis Ltd.",

}

TY - JOUR

T1 - Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions

AU - Asl, Azam

AU - Overton, Michael L.

PY - 2019/1/1

Y1 - 2019/1/1

N2 - It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo and Wolfe inexact line search conditions on the non-smooth convex function f(x) = a|x^(1)| + ∑_{i=2}^n x^(i). We show that if a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at any point x_0 ∈ R^n with x_0^(1) ≠ 0, the iterates converge to a point x̄ with x̄^(1) = 0, although f is unbounded below. We also give conditions under which f(x_k) → −∞, using a specific Armijo–Wolfe bracketing line search. Our experimental results demonstrate that our analysis is reasonably tight.

AB - It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo and Wolfe inexact line search conditions on the non-smooth convex function f(x) = a|x^(1)| + ∑_{i=2}^n x^(i). We show that if a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at any point x_0 ∈ R^n with x_0^(1) ≠ 0, the iterates converge to a point x̄ with x̄^(1) = 0, although f is unbounded below. We also give conditions under which f(x_k) → −∞, using a specific Armijo–Wolfe bracketing line search. Our experimental results demonstrate that our analysis is reasonably tight.

KW - convex optimization

KW - non-smooth optimization

KW - Steepest descent method

UR - http://www.scopus.com/inward/record.url?scp=85074030377&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85074030377&partnerID=8YFLogxK

U2 - 10.1080/10556788.2019.1673388

DO - 10.1080/10556788.2019.1673388

M3 - Article

AN - SCOPUS:85074030377

JO - Optimization Methods and Software

JF - Optimization Methods and Software

SN - 1055-6788

ER -