Learning to linearize under uncertainty

Ross Goroshin, Michael Mathieu, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Training deep feature hierarchies to solve supervised learning tasks has achieved state-of-the-art performance on many problems in computer vision. However, a principled way to train such hierarchies in the unsupervised setting has remained elusive. In this work we suggest a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences. This is done by training a generative model to predict video frames. We also address the inherent uncertainty in prediction by introducing into the network architecture latent variables that are non-deterministic functions of the input.
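The core idea in the abstract — handling prediction uncertainty with a latent variable chosen per sample to best explain the observed next frame — can be sketched in a toy form. This is a minimal illustration, not the paper's architecture: the predictor here is a linear map on 1-D "frames", the latent is picked from a small candidate set by minimizing prediction error, and all names (`predict`, `best_latent`, `W`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(frame, delta, W):
    # Hypothetical predictor: map the frame and add the latent shift delta.
    # The paper uses a deep convolutional encoder/decoder; W here is just
    # an illustrative weight matrix on a flat vector.
    return W @ frame + delta

def best_latent(frame, target, W, candidates):
    # Inference-by-minimization sketch: among a few candidate latent
    # values, pick the one that minimizes squared prediction error,
    # making the latent a (non-deterministic) function of the input pair.
    losses = [np.sum((predict(frame, d, W) - target) ** 2) for d in candidates]
    return candidates[int(np.argmin(losses))]

# Toy data: the "next frame" differs from the current one by an
# unknown constant shift that the latent must absorb.
dim = 8
W = np.eye(dim)
frame = rng.normal(size=dim)
target = frame + 0.5
candidates = [np.full(dim, d) for d in (-0.5, 0.0, 0.5)]

delta = best_latent(frame, target, W, candidates)
loss = np.sum((predict(frame, delta, W) - target) ** 2)
```

In a full training loop, the selected latent would be held fixed while the network weights take a gradient step on the resulting loss, so the model only has to explain the residual transformation that the latent cannot.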

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Number of pages: 9
State: Published - 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015


Other: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Goroshin, R., Mathieu, M., & LeCun, Y. (2015). Learning to linearize under uncertainty. In Advances in Neural Information Processing Systems (Vol. 2015-January, pp. 1234-1242). Neural Information Processing Systems Foundation.