Supervised policy update for deep reinforcement learning

Quan Vuong, Keith Ross, Yiming Zhang

    Research output: Contribution to conference › Paper

    Abstract

    We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning. Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space. Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples. The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem. We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology. The SPU implementation is much simpler than TRPO. In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks.
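
    To make the two-step structure in the abstract concrete, the following is a minimal sketch of an SPU-style update for a discrete action space. It is not the authors' implementation: the exponentiated-advantage target (the standard closed-form solution of a KL-regularized, non-parameterized problem), the assumption that a critic provides advantage estimates for every action at the sampled states, and all names and hyper-parameters (DiscretePolicy, spu_style_update, temperature, epochs) are illustrative choices made here, not taken from the paper.

    import torch
    import torch.nn as nn

    class DiscretePolicy(nn.Module):
        """Small categorical policy for a discrete action space (illustrative)."""
        def __init__(self, obs_dim, n_actions, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, n_actions),
            )

        def forward(self, obs):          # obs: [batch, obs_dim]
            return self.net(obs)         # unnormalized action logits

    def spu_style_update(policy, optimizer, obs, advantages,
                         temperature=1.0, epochs=10):
        """One hypothetical SPU-style step (not the paper's exact loss).

        Step 1: build a non-parameterized target policy at the sampled states by
                re-weighting the current policy with exp(advantage / temperature),
                the usual closed-form solution of a KL-regularized,
                non-parameterized problem.
        Step 2: fit the parameterized policy to that target by supervised
                regression (cross-entropy / KL to the target distribution).

        advantages: tensor of shape [batch, n_actions]; assumed to come from a
        critic able to evaluate every action at the sampled states.
        """
        with torch.no_grad():
            old_probs = policy(obs).softmax(dim=-1)                  # pi_old(.|s)
            # Subtract the per-state max before exponentiating for numerical
            # stability; the normalization below makes this shift irrelevant.
            shifted = advantages - advantages.max(dim=-1, keepdim=True).values
            target = old_probs * (shifted / temperature).exp()
            target = target / target.sum(dim=-1, keepdim=True)       # normalize

        for _ in range(epochs):                                      # regression
            log_probs = policy(obs).log_softmax(dim=-1)
            loss = -(target * log_probs).sum(dim=-1).mean()          # CE to target
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return loss.item()

    # Example wiring (random tensors stand in for collected rollout data):
    # policy = DiscretePolicy(obs_dim=8, n_actions=4)
    # opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    # obs, adv = torch.randn(256, 8), torch.randn(256, 4)
    # spu_style_update(policy, opt, obs, adv)

    The paper instantiates this two-step structure for a variety of proximity constraints and for both discrete and continuous action spaces; the sketch above only mirrors the discrete, KL-proximal case for illustration.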

    Original language: English (US)
    State: Published - Jan 1 2019
    Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
    Duration: May 6 2019 - May 9 2019

    Conference

    Conference: 7th International Conference on Learning Representations, ICLR 2019
    Country: United States
    City: New Orleans
    Period: 5/6/19 - 5/9/19

    ASJC Scopus subject areas

    • Education
    • Computer Science Applications
    • Linguistics and Language
    • Language and Linguistics

    Cite this

    Vuong, Q., Ross, K., & Zhang, Y. (2019). Supervised policy update for deep reinforcement learning. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.
