Model-predictive policy learning with uncertainty regularization for driving in dense traffic

Mikael Henaff, Yann LeCun, Alfredo Canziani

Research output: Contribution to conference › Paper

Abstract

Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. We propose to train a policy by unrolling a learned model of the environment dynamics over multiple time steps while explicitly penalizing two costs: the original cost the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.
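The training scheme the abstract describes can be illustrated with a small sketch: a policy is rolled out through a learned dynamics model for several steps, and its loss combines the task cost with an uncertainty penalty measured as the disagreement between stochastic (dropout-masked) forward passes of the dynamics model. Everything below is an illustrative toy, not the authors' implementation: the linear dynamics model, the quadratic stand-in task cost, and all names (`dynamics`, `rollout_cost`, `policy_gain`, `lam`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dynamics model: one hidden layer with dropout.
W_in = rng.normal(size=(8, 4))   # [state(3); action(1)] -> hidden(8)
W_out = rng.normal(size=(3, 8))  # hidden(8) -> next state(3)

def dynamics(state, action, mask):
    """One-step prediction with a fixed dropout mask on the hidden units."""
    h = np.tanh(W_in @ np.concatenate([state, [action]]))
    return W_out @ (h * mask)

def rollout_cost(policy_gain, state, horizon=5, n_passes=10,
                 lam=0.5, p_keep=0.8):
    """Unroll the learned model under the policy; return the combined cost.

    Task cost here is a stand-in quadratic penalty on the state; the
    uncertainty cost is the variance of predictions across dropout passes,
    i.e. the model's disagreement about states induced by the policy.
    """
    task_cost, unc_cost = 0.0, 0.0
    states = [state.copy() for _ in range(n_passes)]
    # One dropout mask per pass, held fixed across the rollout.
    masks = rng.random((n_passes, 8)) < p_keep
    for _ in range(horizon):
        preds = []
        for k in range(n_passes):
            action = float(policy_gain @ states[k])  # toy linear policy
            states[k] = dynamics(states[k], action, masks[k])
            preds.append(states[k])
        preds = np.stack(preds)                                # (n_passes, 3)
        task_cost += float(np.mean(np.sum(preds**2, axis=1)))  # stand-in cost
        unc_cost += float(np.mean(np.var(preds, axis=0)))      # disagreement
    return task_cost + lam * unc_cost

cost = rollout_cost(np.array([0.1, -0.2, 0.05]),
                    np.array([1.0, 0.0, -0.5]))
```

In the paper's setting this combined cost would be differentiated through the unrolled model to update the policy; penalizing the variance term keeps the policy near states where the dynamics model is confident, which is the regularization the abstract refers to.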

Original language: English (US)
State: Published - Jan 1 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: May 6 2019 - May 9 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country: United States
City: New Orleans
Period: 5/6/19 - 5/9/19


ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Henaff, M., LeCun, Y., & Canziani, A. (2019). Model-predictive policy learning with uncertainty regularization for driving in dense traffic. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.

@conference{4ed899b8abfa4fd4b5e310408c1b97dd,
title = "Model-predictive policy learning with uncertainty regularization for driving in dense traffic",
abstract = "Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. We propose to train a policy by unrolling a learned model of the environment dynamics over multiple time steps while explicitly penalizing two costs: the original cost the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.",
author = "Mikael Henaff and Yann LeCun and Alfredo Canziani",
year = "2019",
month = "1",
day = "1",
language = "English (US)",
note = "7th International Conference on Learning Representations, ICLR 2019 ; Conference date: 06-05-2019 Through 09-05-2019",

}

TY - CONF

T1 - Model-predictive policy learning with uncertainty regularization for driving in dense traffic

AU - Henaff, Mikael

AU - LeCun, Yann

AU - Canziani, Alfredo

PY - 2019/1/1

Y1 - 2019/1/1

N2 - Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. We propose to train a policy by unrolling a learned model of the environment dynamics over multiple time steps while explicitly penalizing two costs: the original cost the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.

AB - Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. We propose to train a policy by unrolling a learned model of the environment dynamics over multiple time steps while explicitly penalizing two costs: the original cost the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.

UR - http://www.scopus.com/inward/record.url?scp=85063518522&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063518522&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:85063518522

ER -