Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients

Tom Schaul, Yann LeCun

Research output: Contribution to conference › Paper

Abstract

Recent work has established an empirically successful framework for adapting learning rates in stochastic gradient descent (SGD). It effectively removes the need for tuning, automatically reducing learning rates over time on stationary problems while permitting them to grow appropriately on non-stationary tasks. Here, we extend the idea in three directions: proper minibatch parallelization, reweighted updates for sparse or orthogonal gradients, and improved robustness on non-smooth loss functions; in the process, the diagonal Hessian estimation procedure, which may not always be available, is replaced by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity, and is hyper-parameter free.
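The abstract summarizes a per-parameter adaptive-learning-rate scheme for SGD built on running gradient statistics, with a finite-difference curvature estimate standing in for the diagonal Hessian. The following is a minimal NumPy sketch of that general idea, in the spirit of the authors' earlier vSGD work; the moving-average scheme, the probe size, and all names (init_state, adaptive_sgd_step, grad_fn, the state keys) are illustrative assumptions, not the algorithm published in the paper.

import numpy as np

def init_state(shape, warmup_grads):
    # Illustrative initialization from a few warm-up gradients; the running
    # statistics could equally be seeded another way.
    g = np.stack(warmup_grads)
    return {
        "g_avg": g.mean(axis=0),                       # running mean gradient
        "g_sq": (g ** 2).mean(axis=0),                 # running second moment
        "h": np.ones(shape),                           # curvature estimate
        "tau": np.full(shape, float(len(warmup_grads)))  # per-parameter memory
    }

def adaptive_sgd_step(theta, grad_fn, state, minibatch, eps=1e-8):
    # One SGD step with per-parameter adaptive learning rates.
    # grad_fn(theta, minibatch) returns a stochastic gradient shaped like theta.
    g = grad_fn(theta, minibatch)

    # Update running first and second moments with per-parameter memory tau.
    lam = 1.0 / state["tau"]
    state["g_avg"] = (1 - lam) * state["g_avg"] + lam * g
    state["g_sq"] = (1 - lam) * state["g_sq"] + lam * g * g

    # Finite-difference curvature estimate in place of an explicit diagonal
    # Hessian (the probe size here is an illustrative choice).
    delta = eps + np.abs(state["g_avg"])
    g_shift = grad_fn(theta + delta, minibatch)
    state["h"] = (1 - lam) * state["h"] + lam * np.maximum(np.abs(g_shift - g) / delta, eps)

    # Adaptive learning rate: (expected gradient)^2 / (curvature * second moment).
    eta = state["g_avg"] ** 2 / (state["h"] * state["g_sq"] + eps)

    # Adapt the memory: forget faster where the gradient signal is noisy.
    state["tau"] = (1.0 - state["g_avg"] ** 2 / (state["g_sq"] + eps)) * state["tau"] + 1.0

    return theta - eta * g, state

The per-parameter memory tau is what lets learning rates shrink on stationary problems yet grow again when the task changes, as described in the abstract.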

Original language: English (US)
State: Published - Jan 1 2013
Event: 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States
Duration: May 2 2013 – May 4 2013

Conference

Conference: 1st International Conference on Learning Representations, ICLR 2013
Country: United States
City: Scottsdale
Period: 5/2/13 – 5/4/13

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Schaul, T., & LeCun, Y. (2013). Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients. Paper presented at 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, United States.
