Evolving Game Skill-Depth using General Video Game AI agents

Jialin Liu, Julian Togelius, Diego Perez-Liebana, Simon M. Lucas

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Most games have, or can be generalised to have, a number of parameters that may be varied in order to provide instances of games that lead to very different player experiences. The space of possible parameter settings can be seen as a search space, and we can therefore use a Random Mutation Hill Climbing algorithm or other search methods to find the parameter settings that induce the best games. One of the hardest parts of this approach is defining a suitable fitness function. In this paper we explore the possibility of using one of a growing set of General Video Game AI agents to perform automatic play-testing. This enables a very general approach to game evaluation based on estimating the skill-depth of a game. Agent-based play-testing is computationally expensive, so we compare two simple but efficient optimisation algorithms: the Random Mutation Hill-Climber and the Multi-Armed Bandit Random Mutation Hill-Climber. For the test game we use a space-battle game in order to provide a suitable balance between simulation speed and potential skill-depth. Results show that both algorithms are able to rapidly evolve game versions with significant skill-depth, but that choosing a suitable resampling number is essential in order to combat the effects of noise.
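    The search procedure the abstract describes can be sketched as a Random Mutation Hill-Climber that averages repeated noisy evaluations (resampling). This is an illustrative sketch, not the authors' code: `mutate` and `fitness` are hypothetical stand-ins for the game-parameter mutation operator and the agent-based skill-depth estimate used in the paper.

    ```python
    import random

    def rmhc_resample(init, mutate, fitness, iters=100, resamples=5, rng=None):
        """Random Mutation Hill-Climber for a noisy fitness function.

        Each candidate is evaluated `resamples` times and the evaluations are
        averaged, reducing the chance that noise causes a bad accept/reject.
        """
        rng = rng or random.Random(0)

        def avg_fitness(x):
            # Resampling: average repeated noisy evaluations of the same point.
            return sum(fitness(x) for _ in range(resamples)) / resamples

        best, best_f = init, avg_fitness(init)
        for _ in range(iters):
            cand = mutate(best, rng)
            cand_f = avg_fitness(cand)
            if cand_f >= best_f:  # accept ties to allow drift across plateaus
                best, best_f = cand, cand_f
        return best, best_f
    ```

    With a toy noisy objective (maximise `-(x - 3)**2` plus Gaussian noise) and a mutation that perturbs `x` by ±0.5, the climber settles near the optimum; in the paper the "fitness" is instead a skill-depth estimate obtained by pitting GVG-AI agents against each other, which is far more expensive per evaluation and makes the choice of `resamples` critical.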

    Original language: English (US)
    Title of host publication: 2017 IEEE Congress on Evolutionary Computation, CEC 2017 - Proceedings
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 2299-2307
    Number of pages: 9
    ISBN (Electronic): 9781509046010
    DOI: 10.1109/CEC.2017.7969583
    State: Published - Jul 5, 2017
    Event: 2017 IEEE Congress on Evolutionary Computation, CEC 2017 - Donostia-San Sebastian, Spain
    Duration: Jun 5, 2017 - Jun 8, 2017



    Keywords

    • Automatic game design
    • Game tuning
    • GVG-AI
    • Optimisation
    • RMHC

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Computer Networks and Communications
    • Computer Science Applications
    • Signal Processing

    Cite this

    Liu, J., Togelius, J., Perez-Liebana, D., & Lucas, S. M. (2017). Evolving Game Skill-Depth using General Video Game AI agents. In 2017 IEEE Congress on Evolutionary Computation, CEC 2017 - Proceedings (pp. 2299-2307). [7969583] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CEC.2017.7969583

