Evolving in-game mood-expressive music with MetaCompose

Marco Scirea, Julian Togelius, Peter Eklund, Sebastian Risi

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    MetaCompose is a music generator based on a hybrid evolutionary technique that combines FI-2POP with multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real time that expresses different mood-states in a game-playing environment (Checkers). In particular, the paper focuses on determining whether differences in player experience can be observed (i) between affect-dynamic and static music, and (ii) when the music supports the game's internal narrative/state. Participants were asked to play two games of Checkers while listening to two (out of three) different set-ups of game-related generated music. The possible set-ups were: static expression, consistent affective expression, and random affective expression. During game-play, players wore an E4 wristband, allowing various physiological measures to be recorded, such as blood volume pulse (BVP) and electrodermal activity (EDA). On three of the four criteria examined (engagement, music quality, coherence with game excitement, and coherence with performance), the data collected support the hypothesis that players prefer dynamic affective music when asked to reflect on the current game-state. In the future, this system could allow designers and composers to easily create affective, dynamic soundtracks for interactive applications.
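    For readers unfamiliar with the hybrid the abstract names, the following is a minimal, hypothetical Python sketch of an FI-2POP-style loop combined with Pareto-based (multi-objective) selection: two populations evolve in parallel, the infeasible one selected to minimize constraint violations and the feasible one ranked by Pareto dominance over multiple objectives, with offspring migrating to whichever population matches their feasibility. The genome (a short chord-degree sequence), the violations() constraint, and the two objectives are invented toy stand-ins, not MetaCompose's actual representation or fitness functions, which are described in the paper itself.

    # A minimal, illustrative FI-2POP + multi-objective sketch.
    # The genome, constraint, and objectives are toy stand-ins, NOT the paper's.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 8, 40, 100

    def random_genome():
        return [random.randint(0, 6) for _ in range(GENOME_LEN)]  # scale degrees

    def violations(g):
        # Toy feasibility constraint: count immediately repeated chords.
        return sum(1 for a, b in zip(g, g[1:]) if a == b)

    def objectives(g):
        # Two toy objectives to maximize: variety and melodic smoothness.
        variety = len(set(g)) / 7.0
        smoothness = 1.0 - sum(abs(a - b) for a, b in zip(g, g[1:])) / (6.0 * (GENOME_LEN - 1))
        return (variety, smoothness)

    def dominates(a, b):
        # Pareto dominance for maximization: a is no worse everywhere, better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def mutate(g):
        g = g[:]
        g[random.randrange(GENOME_LEN)] = random.randint(0, 6)
        return g

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def select_feasible(pop):
        # Multi-objective selection: keep genomes dominated by fewer others.
        scored = [(sum(dominates(objectives(q), objectives(p)) for q in pop), p) for p in pop]
        scored.sort(key=lambda t: t[0])
        return [p for _, p in scored[: max(2, len(pop) // 2)]]

    def evolve():
        feasible, infeasible = [], [random_genome() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # Infeasible population is pushed toward feasibility (fewer violations).
            infeasible.sort(key=violations)
            parents = (select_feasible(feasible) if len(feasible) > 1 else []) + infeasible[: POP_SIZE // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE)]
            # Offspring migrate to the population matching their feasibility.
            feasible = [g for g in feasible + children if violations(g) == 0][:POP_SIZE]
            infeasible = sorted((g for g in children if violations(g) > 0), key=violations)[:POP_SIZE]
        return feasible

    if __name__ == "__main__":
        front = evolve()
        print(len(front), "feasible genomes; sample:", front[0] if front else None)

    This sketch omits refinements such as NSGA-II crowding distance and only shows how the two-population scheme and Pareto selection interlock.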

    Original language: English (US)
    Title of host publication: Audio Mostly - A Conference on Interaction with Sound - 2018 Sound in Immersion and Emotion, AM 2018 - Conference Proceedings
    Publisher: Association for Computing Machinery
    ISBN (Electronic): 9781450366090
    DOI: https://doi.org/10.1145/3243274.3243292
    State: Published - Sep 12, 2018
    Event: 2018 International Audio Mostly Conference - A Conference on Interaction with Sound: Sound in Immersion and Emotion, AM 2018 - Wrexham, United Kingdom
    Duration: Sep 12, 2018 - Sep 14, 2018

    Keywords

    • Affective expression
    • Evolutionary algorithms
    • Music generation

    ASJC Scopus subject areas

    • Human-Computer Interaction
    • Computer Networks and Communications
    • Computer Vision and Pattern Recognition
    • Software

    Cite this

    Scirea, M., Togelius, J., Eklund, P., & Risi, S. (2018). Evolving in-game mood-expressive music with MetaCompose. In Audio Mostly - A Conference on Interaction with Sound - 2018 Sound in Immersion and Emotion, AM 2018 - Conference Proceedings (Article a8). Association for Computing Machinery. https://doi.org/10.1145/3243274.3243292
