EarGram

An application for interactive exploration of concatenative sound synthesis in pure data

Gilberto Bernardes, Carlos Guedes, Bruce Pennycook

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (corpus) by rules other than their original temporal order into musically coherent outputs. Of note are the system's machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus.

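For readers who want a concrete picture of the corpus-based recombination workflow the abstract describes, the sketch below is a minimal, illustrative Python/NumPy example, not the authors' Pure Data implementation: it segments a signal into snippets, computes two coarse descriptors per snippet (RMS energy and spectral centroid), and concatenates snippets in an order driven by a target descriptor curve rather than by their original timeline. All function names, frame sizes, and the synthetic source material are assumptions made for illustration only.

    # Minimal, illustrative sketch (NumPy only) of corpus-based recombination:
    # segment -> describe -> reselect by descriptor similarity -> concatenate.
    # This is NOT earGram's implementation; names and parameters are invented.
    import numpy as np

    def segment(signal, frame_len):
        """Split a mono signal into equal-length snippets (the corpus units)."""
        n_frames = len(signal) // frame_len
        return signal[:n_frames * frame_len].reshape(n_frames, frame_len)

    def describe(frames, sr):
        """Two coarse descriptors per snippet: RMS energy and spectral centroid."""
        rms = np.sqrt(np.mean(frames ** 2, axis=1))
        spectra = np.abs(np.fft.rfft(frames, axis=1))
        freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
        centroid = (spectra * freqs).sum(axis=1) / (spectra.sum(axis=1) + 1e-12)
        return np.column_stack([rms, centroid / (sr / 2)])  # roughly 0..1 each

    def recombine(frames, descriptors, target_curve):
        """For each step of a target descriptor curve, pick the closest unused snippet."""
        available = list(range(len(frames)))
        picks = []
        for target in target_curve:
            dists = [np.linalg.norm(descriptors[i] - target) for i in available]
            picks.append(available.pop(int(np.argmin(dists))))
        return np.concatenate([frames[i] for i in picks])

    # Stand-in source material: a noisy upward chirp instead of a recorded corpus.
    sr = 44100
    t = np.linspace(0, 4, 4 * sr, endpoint=False)
    source = np.sin(2 * np.pi * (220 + 100 * t) * t) + 0.05 * np.random.randn(len(t))

    frames = segment(source, frame_len=2048)
    descriptors = describe(frames, sr)

    # Target: ramp loudness and brightness upward over 50 output snippets,
    # i.e. an ordering rule unrelated to the snippets' original timeline.
    target_curve = np.column_stack([
        np.linspace(descriptors[:, 0].min(), descriptors[:, 0].max(), 50),
        np.linspace(descriptors[:, 1].min(), descriptors[:, 1].max(), 50),
    ])
    output = recombine(frames, descriptors, target_curve)
    print(f"Recombined {len(target_curve)} snippets into {len(output)} samples.")

earGram itself runs inside Pure Data with richer descriptor analysis, machine-learning, and real-time visualization and control; the sketch only mirrors the generic segment-describe-reselect-concatenate pipeline of concatenative synthesis.
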
Original language: English (US)
Title of host publication: From Sounds to Music and Emotions - 9th International Symposium, CMMR 2012, Revised Selected Papers
Pages: 110-129
Number of pages: 20
DOI: 10.1007/978-3-642-41248-6_7
State: Published - Oct 10, 2013
Event: 9th International Symposium on Computer Music Modeling and Retrieval, CMMR 2012 - London, United Kingdom
Duration: Jun 19, 2012 - Jun 22, 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7900 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 9th International Symposium on Computer Music Modeling and Retrieval, CMMR 2012
Country: United Kingdom
City: London
Period: 6/19/12 - 6/22/12

Keywords

  • Concatenative sound synthesis
  • generative music
  • recombination

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Bernardes, G., Guedes, C., & Pennycook, B. (2013). EarGram: An application for interactive exploration of concatenative sound synthesis in pure data. In From Sounds to Music and Emotions - 9th International Symposium, CMMR 2012, Revised Selected Papers (pp. 110-129). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7900 LNCS). https://doi.org/10.1007/978-3-642-41248-6_7

@inproceedings{22a4766181a34862922780c24844ca63,
title = "EarGram: An application for interactive exploration of concatenative sound synthesis in pure data",
abstract = "This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (corpus) by rules other than their original temporal order into musically coherent outputs. Of note are the system's machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus.",
keywords = "Concatenative sound synthesis, generative music, recombination",
author = "Gilberto Bernardes and Carlos Guedes and Bruce Pennycook",
year = "2013",
month = "10",
day = "10",
doi = "10.1007/978-3-642-41248-6_7",
language = "English (US)",
isbn = "9783642412479",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "110--129",
booktitle = "From Sounds to Music and Emotions - 9th International Symposium, CMMR 2012, Revised Selected Papers",
}
