Pushing stochastic gradient towards second-order methods - Backpropagation learning with transformations in nonlinearities

Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.
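
The transformations described above can be sketched in code. The following NumPy snippet is an illustrative reading of the abstract, not the authors' exact formulation: for a nonlinearity f, each hidden unit's output is shifted by an average-slope term (alpha) and an average-output term (beta) so that it has zero mean and zero average slope over a batch, and then scaled by gamma to normalize its output scale; the removed linear part (alpha * x + beta) is what the separate shortcut connections are meant to carry. The function name and numerical details are assumptions for illustration.

```python
import numpy as np

def transform_nonlinearity(x, f=np.tanh, f_prime=lambda x: 1.0 - np.tanh(x) ** 2):
    """Illustrative sketch (not the paper's exact procedure):
    make each hidden unit's transformed output have roughly zero mean,
    zero average slope, and unit scale over the current batch.

    x : (batch, units) array of pre-activations for one hidden layer.
    Returns the transformed activations and per-unit (alpha, beta, gamma).
    """
    y = f(x)
    alpha = f_prime(x).mean(axis=0)              # average slope per unit
    beta = (y - alpha * x).mean(axis=0)          # average output after removing the slope term
    centered = y - alpha * x - beta              # zero mean, zero average slope
    gamma = 1.0 / (centered.std(axis=0) + 1e-8)  # third transformation: normalize output scale
    return gamma * centered, (alpha, beta, gamma)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pre_act = rng.normal(size=(256, 5))          # hypothetical batch of pre-activations
    h, (alpha, beta, gamma) = transform_nonlinearity(pre_act)
    print("per-unit mean (should be ~0):", np.round(h.mean(axis=0), 3))
    print("per-unit std  (should be ~1):", np.round(h.std(axis=0), 3))
```

In the paper's setting, the linear dependencies removed here are modeled by separate shortcut connections past the nonlinearity, which is what the abstract credits with making plain stochastic gradient behave closer to second-order optimization.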

Original language: English (US)
Title of host publication: Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings
Pages: 442-449
Number of pages: 8
Volume: 8226 LNCS
Edition: PART 1
DOIs: 10.1007/978-3-642-42054-2_55
State: Published - 2013
Event: 20th International Conference on Neural Information Processing, ICONIP 2013 - Daegu, Korea, Republic of
Duration: Nov 3 2013 - Nov 7 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 8226 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 20th International Conference on Neural Information Processing, ICONIP 2013
Country: Korea, Republic of
City: Daegu
Period: 11/3/13 - 11/7/13

Keywords

  • Deep learning
  • Multi-layer perceptron network
  • Stochastic gradient

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Vatanen, T., Raiko, T., Valpola, H., & LeCun, Y. (2013). Pushing stochastic gradient towards second-order methods - Backpropagation learning with transformations in nonlinearities. In Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings (PART 1 ed., Vol. 8226 LNCS, pp. 442-449). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 8226 LNCS, No. PART 1). https://doi.org/10.1007/978-3-642-42054-2_55

@inproceedings{4f9fa73c1c9f49f2961994fbd09fe607,
title = "Pushing stochastic gradient towards second-order methods - Backpropagation learning with transformations in nonlinearities",
abstract = "Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.",
keywords = "Deep learning, Multi-layer perceptron network, Stochastic gradient",
author = "Tommi Vatanen and Tapani Raiko and Harri Valpola and Yann LeCun",
year = "2013",
doi = "10.1007/978-3-642-42054-2_55",
language = "English (US)",
isbn = "9783642420535",
volume = "8226 LNCS",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
number = "PART 1",
pages = "442--449",
booktitle = "Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings",
edition = "PART 1",

}

TY - GEN

T1 - Pushing stochastic gradient towards second-order methods - Backpropagation learning with transformations in nonlinearities

AU - Vatanen, Tommi

AU - Raiko, Tapani

AU - Valpola, Harri

AU - LeCun, Yann

PY - 2013

Y1 - 2013

N2 - Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.

AB - Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.

KW - Deep learning

KW - Multi-layer perceptron network

KW - Stochastic gradient

UR - http://www.scopus.com/inward/record.url?scp=84893419509&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84893419509&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-42054-2_55

DO - 10.1007/978-3-642-42054-2_55

M3 - Conference contribution

AN - SCOPUS:84893419509

SN - 9783642420535

VL - 8226 LNCS

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 442

EP - 449

BT - Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings

ER -