Learning to linearize under uncertainty

Ross Goroshin, Michael Mathieu, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Training deep feature hierarchies to solve supervised learning tasks has achieved state-of-the-art performance on many problems in computer vision. However, a principled way to train such hierarchies in the unsupervised setting has remained elusive. In this work we suggest a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences. This is done by training a generative model to predict video frames. We also address the problem of inherent uncertainty in prediction by introducing into the network architecture latent variables that are non-deterministic functions of the input.
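The abstract's core idea can be sketched in miniature: frame codes are extrapolated linearly, and a latent variable, inferred by minimizing prediction error rather than computed deterministically from the input, absorbs the uncertainty. Everything below (the toy 1-D "frames", the random linear encoder/decoder, the small candidate set standing in for gradient-based latent inference) is a hypothetical simplification of the paper's deep convolutional setup, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "frames" standing in for three consecutive video frames.
frame1, frame2, frame3 = (rng.standard_normal(8) for _ in range(3))

# Placeholder linear encoder/decoder (the paper trains deep
# convolutional networks; random matrices keep the sketch minimal).
W_enc = rng.standard_normal((4, 8))
W_dec = rng.standard_normal((8, 4))

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

# Linearization: predict the code of frame 3 by linear extrapolation
# of the codes of frames 1 and 2.
z1, z2 = encode(frame1), encode(frame2)
z_pred = z2 + (z2 - z1)

# Uncertainty: a latent variable delta is *inferred*, here by picking
# the candidate that minimizes prediction error (a crude stand-in for
# minimizing over the latent during training).
candidates = [np.zeros(4)] + [rng.standard_normal(4) for _ in range(8)]
errors = [np.sum((frame3 - decode(z_pred + d)) ** 2) for d in candidates]
best = candidates[int(np.argmin(errors))]

# Final prediction loss with the inferred latent; since the zero latent
# is a candidate, inference can only decrease the error.
loss = np.sum((frame3 - decode(z_pred + best)) ** 2)
```

Because the latent is chosen per-sample to explain the observed frame, it captures the unpredictable part of the transformation instead of forcing the deterministic predictor to average over futures.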

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural information processing systems foundation
Pages: 1234-1242
Number of pages: 9
Volume: 2015-January
State: Published - 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015

Other

Other: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
Country: Canada
City: Montreal
Period: 12/7/15 - 12/12/15

Fingerprint

Supervised learning
Network architecture
Computer vision
Uncertainty

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Goroshin, R., Mathieu, M., & LeCun, Y. (2015). Learning to linearize under uncertainty. In Advances in Neural Information Processing Systems (Vol. 2015-January, pp. 1234-1242). Neural information processing systems foundation.

@inproceedings{9a7db9d51e4a4cb0819b881922841e72,
title = "Learning to linearize under uncertainty",
abstract = "Training deep feature hierarchies to solve supervised learning tasks has achieved state of the art performance on many problems in computer vision. However, a principled way in which to train such hierarchies in the unsupervised setting has remained elusive. In this work we suggest a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences. This is done by training a generative model to predict video frames. We also address the problem of inherent uncertainty in prediction by introducing latent variables that are non-deterministic functions of the input into the network architecture.",
author = "Ross Goroshin and Michael Mathieu and Yann LeCun",
year = "2015",
language = "English (US)",
volume = "2015-January",
pages = "1234--1242",
booktitle = "Advances in Neural Information Processing Systems",
publisher = "Neural information processing systems foundation",

}

TY - GEN

T1 - Learning to linearize under uncertainty

AU - Goroshin, Ross

AU - Mathieu, Michael

AU - LeCun, Yann

PY - 2015

Y1 - 2015

N2 - Training deep feature hierarchies to solve supervised learning tasks has achieved state of the art performance on many problems in computer vision. However, a principled way in which to train such hierarchies in the unsupervised setting has remained elusive. In this work we suggest a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences. This is done by training a generative model to predict video frames. We also address the problem of inherent uncertainty in prediction by introducing latent variables that are non-deterministic functions of the input into the network architecture.

UR - http://www.scopus.com/inward/record.url?scp=84965139813&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84965139813&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84965139813

VL - 2015-January

SP - 1234

EP - 1242

BT - Advances in Neural Information Processing Systems

PB - Neural information processing systems foundation

ER -