Saturating auto-encoders: International Conference on Learning Representations, ICLR 2013

Rostislav Goroshin, Yann LeCun

Research output: Contribution to conference › Paper

Abstract

We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE’s ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
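The abstract's idea can be sketched concretely. Below is a minimal NumPy illustration, assuming a saturated-linear activation (linear on [-1, 1], flat outside) and a tied-weight auto-encoder; the penalty used here, the distance from each pre-activation to the nearest zero-gradient region, and all function names are illustrative reconstructions from the abstract, not code from the paper.

```python
import numpy as np

def sat_linear(z):
    # Saturated-linear activation: identity on [-1, 1],
    # constant (zero-gradient) outside that interval.
    return np.clip(z, -1.0, 1.0)

def saturation_penalty(z):
    # Distance from each pre-activation to the nearest saturated
    # region (|z| >= 1 for this activation); zero once the unit saturates.
    return np.maximum(1.0 - np.abs(z), 0.0)

def satae_loss(x, W, b_enc, b_dec, lam=0.1):
    # Tied-weight auto-encoder: h = f(W x + b_enc), x_hat = W^T h + b_dec.
    z = W @ x + b_enc
    h = sat_linear(z)
    x_hat = W.T @ h + b_dec
    recon = 0.5 * np.sum((x - x_hat) ** 2)
    # Reconstruction error plus the saturation regularizer.
    return recon + lam * np.sum(saturation_penalty(z))
```

Because the penalty vanishes only where a unit sits in a flat region of its activation, minimizing it drives hidden units toward saturation, which is what limits the auto-encoder's ability to reconstruct inputs far from the data manifold.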

Original language: English (US)
State: Published - Jan 1 2013
Event: 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States
Duration: May 2 2013 - May 4 2013

Conference

Conference: 1st International Conference on Learning Representations, ICLR 2013
Country: United States
City: Scottsdale
Period: 5/2/13 - 5/4/13


ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Goroshin, R., & LeCun, Y. (2013). Saturating auto-encoders: International Conference on Learning Representations, ICLR 2013. Paper presented at 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, United States.


Scopus record: SCOPUS:84973883855 (http://www.scopus.com/inward/record.url?scp=84973883855&partnerID=8YFLogxK)