Enhanced gradient for training restricted Boltzmann machines

Kyunghyun Cho, Tapani Raiko, Alexander Ilin

Research output: Contribution to journal › Article


Restricted Boltzmann machines (RBMs) are often used as building blocks in greedy learning of deep networks. However, training this simple model can be laborious. Traditional learning algorithms often converge only with the right choice of metaparameters that specify, for example, learning rate scheduling and the scale of the initial weights. They are also sensitive to specific data representation. An equivalent RBM can be obtained by flipping some bits and changing the weights and biases accordingly, but traditional learning rules are not invariant to such transformations. Without careful tuning of these training settings, traditional algorithms can easily get stuck or even diverge. In this letter, we present an enhanced gradient that is derived to be invariant to bit-flipping transformations. We experimentally show that the enhanced gradient yields more stable training of RBMs both when used with a fixed learning rate and an adaptive one.
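The bit-flipping equivalence mentioned in the abstract can be checked directly on a tiny RBM. The sketch below (an illustration, not code from the paper) uses the standard binary RBM energy E(v, h) = -v^T W h - b^T v - c^T h; flipping visible unit i (v_i → 1 - v_i) while negating row i of W and bias b_i, and absorbing the old row into the hidden biases, shifts the energy by a constant only, so the model distribution is unchanged. All function and variable names here are hypothetical.

```python
import itertools
import numpy as np

def energy(v, h, W, b, c):
    # Binary RBM energy: E(v, h) = -v^T W h - b^T v - c^T h
    return -(v @ W @ h) - b @ v - c @ h

def flip_visible(i, W, b, c):
    # Parameters of the equivalent RBM after the bit flip v_i -> 1 - v_i.
    W2, b2, c2 = W.copy(), b.copy(), c.copy()
    c2 = c2 + W2[i]   # c'_j = c_j + W_ij (absorb the old row)
    W2[i] = -W2[i]    # W'_ij = -W_ij
    b2[i] = -b2[i]    # b'_i  = -b_i
    return W2, b2, c2

def joint(W, b, c, nv, nh):
    # Exhaustive joint distribution P(v, h) of a small binary RBM.
    states = [(np.array(v), np.array(h))
              for v in itertools.product([0, 1], repeat=nv)
              for h in itertools.product([0, 1], repeat=nh)]
    e = np.array([energy(v, h, W, b, c) for v, h in states])
    p = np.exp(-e)
    return p / p.sum(), states

rng = np.random.default_rng(0)
nv, nh, i = 3, 2, 1  # toy sizes; flip visible unit 1
W = rng.normal(size=(nv, nh))
b = rng.normal(size=nv)
c = rng.normal(size=nh)

p, states = joint(W, b, c, nv, nh)
W2, b2, c2 = flip_visible(i, W, b, c)
p2, _ = joint(W2, b2, c2, nv, nh)

# P'(v with bit i flipped, h) must equal P(v, h) for every configuration.
for k, (v, h) in enumerate(states):
    vf = v.copy()
    vf[i] = 1 - vf[i]
    k2 = next(m for m, (v2, h2) in enumerate(states)
              if np.array_equal(v2, vf) and np.array_equal(h2, h))
    assert np.isclose(p[k], p2[k2])
```

Because traditional gradient rules are not invariant under this reparameterization, the same data encoded with flipped bits can train very differently; the paper's enhanced gradient is constructed to remove exactly this dependence.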

Original language: English (US)
Pages (from-to): 805-831
Number of pages: 27
Journal: Neural Computation
Issue number: 3
Publication status: Published - 2013


ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Arts and Humanities (miscellaneous)