Identifying and attacking the saddle point problem in high-dimensional non-convex optimization

Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance.

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural information processing systems foundation
Pages: 2933-2941
Number of pages: 9
Volume: 4
Edition: January
State: Published - 2014
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: Dec 8, 2014 - Dec 13, 2014

Other

Other: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014
Country: Canada
City: Montreal
Period: 12/8/14 - 12/13/14

Fingerprint

  • Newton-Raphson method
  • Recurrent neural networks
  • Circuit theory
  • Physics
  • Neural networks

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Dauphin, Y. N., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S., & Bengio, Y. (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems (January ed., Vol. 4, pp. 2933-2941). Neural information processing systems foundation.
