Autoencoder-augmented neuroevolution for visual doom playing

Samuel Alvernaz, Julian Togelius

    Research output: Chapter in Book/Report/Conference proceeding · Conference contribution

    Abstract

    Neuroevolution has proven effective at many reinforcement learning tasks, including tasks with incomplete information and delayed rewards, but does not seem to scale well to high-dimensional controller representations, which are needed for tasks where the input is raw pixel data. We propose a novel method where we train an autoencoder to create a comparatively low-dimensional representation of the environment observation, and then use CMA-ES to train neural network controllers acting on this input data. As the behavior of the agent changes the nature of the input data, the autoencoder training progresses throughout evolution. We test this method in the VizDoom environment built on the classic FPS Doom, where it performs well on a health-pack gathering task.
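    The pipeline the abstract describes (encode each raw observation into a low-dimensional latent code, then evolve a small controller that acts on that code) can be sketched roughly as follows. Everything here is illustrative: the dimensions are made up, a fixed random projection stands in for the paper's trained autoencoder encoder, the fitness function is a toy surrogate rather than VizDoom, and a simplified Gaussian evolution strategy replaces CMA-ES.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions; the paper works on VizDoom pixel frames.
    OBS_DIM, LATENT_DIM, N_ACTIONS = 64, 8, 4

    # Stand-in for the trained autoencoder's encoder: a fixed random
    # projection here, a learned deep autoencoder in the paper.
    W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)

    def encode(obs):
        """Compress a raw observation into a low-dimensional latent code."""
        return np.tanh(W_enc @ obs)

    def controller_action(theta, latent):
        """Linear policy on the latent code; theta is the flat weight vector."""
        W = theta.reshape(N_ACTIONS, LATENT_DIM)
        return int(np.argmax(W @ latent))

    def fitness(theta, episodes=4):
        """Toy surrogate fitness: fraction of episodes where the controller
        picks a hidden 'correct' action (a real run would score VizDoom reward)."""
        total = 0.0
        for _ in range(episodes):
            obs = rng.normal(size=OBS_DIM)
            target = int(np.abs(obs).argmax()) % N_ACTIONS  # hidden rule
            if controller_action(theta, encode(obs)) == target:
                total += 1.0
        return total / episodes

    # Simplified (mu, lambda)-style evolution strategy; the paper uses CMA-ES,
    # which additionally adapts the full covariance of the search distribution.
    dim = N_ACTIONS * LATENT_DIM
    mean, sigma, popsize = np.zeros(dim), 0.5, 16
    for gen in range(20):
        pop = mean + sigma * rng.normal(size=(popsize, dim))
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-popsize // 4:]]  # keep the best quarter
        mean = elite.mean(axis=0)

    best = fitness(mean)
    ```

    In the paper's setup the outer loop also continues training the autoencoder on observations gathered by the evolving agents, since the agents' changing behavior changes the distribution of frames the encoder must compress.
    
    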

    Original language: English (US)
    Title of host publication: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 1-8
    Number of pages: 8
    ISBN (Electronic): 9781538632338
    DOI: 10.1109/CIG.2017.8080408
    State: Published - Oct 23, 2017
    Event: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017 - New York, United States
    Duration: Aug 22, 2017 – Aug 25, 2017

    Other

    Other: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017
    Country: United States
    City: New York
    Period: 8/22/17 – 8/25/17

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Human-Computer Interaction
    • Media Technology

    Cite this

    Alvernaz, S., & Togelius, J. (2017). Autoencoder-augmented neuroevolution for visual doom playing. In 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017 (pp. 1-8). [8080408] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/CIG.2017.8080408

    @inproceedings{252fd07961cb4a64a81a7037b3e7e76d,
    title = "Autoencoder-augmented neuroevolution for visual doom playing",
    abstract = "Neuroevolution has proven effective at many reinforcement learning tasks, including tasks with incomplete information and delayed rewards, but does not seem to scale well to high-dimensional controller representations, which are needed for tasks where the input is raw pixel data. We propose a novel method where we train an autoencoder to create a comparatively low-dimensional representation of the environment observation, and then use CMA-ES to train neural network controllers acting on this input data. As the behavior of the agent changes the nature of the input data, the autoencoder training progresses throughout evolution. We test this method in the VizDoom environment built on the classic FPS Doom, where it performs well on a health-pack gathering task.",
    author = "Samuel Alvernaz and Julian Togelius",
    year = "2017",
    month = "10",
    day = "23",
    doi = "10.1109/CIG.2017.8080408",
    language = "English (US)",
    pages = "1--8",
    booktitle = "2017 IEEE Conference on Computational Intelligence and Games, CIG 2017",
    publisher = "Institute of Electrical and Electronics Engineers Inc.",
    address = "United States",

    }
