Understanding dropout

Training multi-layer perceptrons with auxiliary independent stochastic neurons

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed. The proposed method is shown to generalize the recently successful methods of dropout [5], explicit noise injection [12,3], and semantic hashing [10]. Under this framework, dropout extends naturally to allow a separate dropping probability for each hidden neuron or layer. The use of different dropping probabilities for separate hidden layers is investigated empirically.
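
As an illustration of the per-layer extension, the following is a minimal sketch (not code from the paper): an MLP forward pass in which each hidden layer l has its own dropping probability drop_probs[l], and using the same probability for every layer recovers standard dropout. The names (forward, drop_probs) and the 1/(1-p) "inverted dropout" rescaling are assumptions chosen for the example, not necessarily the paper's exact formulation.

# Illustrative sketch only; not the paper's reference implementation.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, biases, drop_probs, train=True):
    """Forward pass with a separate dropping probability per hidden layer.

    drop_probs[l] is the probability of zeroing a neuron in layer l.
    Surviving activations are rescaled by 1 / (1 - p) at training time
    so that expected activations match test time (inverted dropout).
    """
    h = x
    for l, (W, b) in enumerate(zip(weights, biases)):
        h = np.maximum(0.0, h @ W + b)       # ReLU hidden layer
        p = drop_probs[l]
        if train and p > 0.0:
            mask = rng.random(h.shape) >= p  # keep each unit with prob. 1 - p
            h = h * mask / (1.0 - p)
    return h

# Toy usage: three hidden layers, each with its own dropping probability.
x = rng.standard_normal((4, 16))
sizes = [(16, 32), (32, 32), (32, 8)]
weights = [0.1 * rng.standard_normal(s) for s in sizes]
biases = [np.zeros(s[1]) for s in sizes]
h = forward(x, weights, biases, drop_probs=[0.2, 0.5, 0.5])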

Original language: English (US)
Title of host publication: Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings
Pages: 474-481
Number of pages: 8
Volume: 8226 LNCS
Edition: PART 1
DOIs: https://doi.org/10.1007/978-3-642-42054-2_59
State: Published - 2013
Event: 20th International Conference on Neural Information Processing, ICONIP 2013 - Daegu, Korea, Republic of
Duration: Nov 3, 2013 - Nov 7, 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 8226 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Event: 20th International Conference on Neural Information Processing, ICONIP 2013
Country: Korea, Republic of
City: Daegu
Period: 11/3/13 - 11/7/13

Keywords

  • Deep learning
  • Dropout
  • Multi-layer perceptron
  • Stochastic neuron

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Cho, K. (2013). Understanding dropout: Training multi-layer perceptrons with auxiliary independent stochastic neurons. In Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings (PART 1 ed., Vol. 8226 LNCS, pp. 474-481). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 8226 LNCS, No. PART 1). https://doi.org/10.1007/978-3-642-42054-2_59

@inproceedings{d988b77018184fb5b49aa4fc71eec32a,
  title = "Understanding dropout: Training multi-layer perceptrons with auxiliary independent stochastic neurons",
  abstract = "In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed. The proposed method is shown to generalize the recently successful methods of dropout [5], explicit noise injection [12,3], and semantic hashing [10]. Under this framework, dropout extends naturally to allow a separate dropping probability for each hidden neuron or layer. The use of different dropping probabilities for separate hidden layers is investigated empirically.",
  keywords = "Deep learning, Dropout, Multi-layer perceptron, Stochastic neuron",
  author = "Kyunghyun Cho",
  year = "2013",
  doi = "10.1007/978-3-642-42054-2_59",
  language = "English (US)",
  isbn = "9783642420535",
  volume = "8226 LNCS",
  series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
  number = "PART 1",
  pages = "474--481",
  booktitle = "Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings",
  edition = "PART 1",
}
