Computational models of auditory perception from feature extraction to stream segregation and behavior

James Rankin, John Rinzel

Research output: Contribution to journal › Review article

Abstract

Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.

Original language: English (US)
Pages (from-to): 46-53
Number of pages: 8
Journal: Current Opinion in Neurobiology
Volume: 58
DOI: 10.1016/j.conb.2019.06.009
State: Published - Oct 1 2019

Fingerprint

  • Auditory Perception
  • Sound Localization
  • Pleasure
  • Music
  • Hearing
  • Brain Stem

ASJC Scopus subject areas

  • Neuroscience (all)

Cite this

Rankin, J., & Rinzel, J. (2019). Computational models of auditory perception from feature extraction to stream segregation and behavior. Current Opinion in Neurobiology, 58, 46-53. https://doi.org/10.1016/j.conb.2019.06.009

@article{c59e976a93344e06af39c33c978eb38a,
title = "Computational models of auditory perception from feature extraction to stream segregation and behavior",
abstract = "Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.",
author = "James Rankin and John Rinzel",
year = "2019",
month = oct,
day = "1",
doi = "10.1016/j.conb.2019.06.009",
language = "English (US)",
volume = "58",
pages = "46--53",
journal = "Current Opinion in Neurobiology",
issn = "0959-4388",
publisher = "Elsevier Limited",

}

TY  - JOUR
T1  - Computational models of auditory perception from feature extraction to stream segregation and behavior
AU  - Rankin, James
AU  - Rinzel, John
PY  - 2019/10/1
Y1  - 2019/10/1
N2  - Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.
AB  - Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.
UR  - http://www.scopus.com/inward/record.url?scp=85069587285&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85069587285&partnerID=8YFLogxK
U2  - 10.1016/j.conb.2019.06.009
DO  - 10.1016/j.conb.2019.06.009
M3  - Review article
C2  - 31326723
AN  - SCOPUS:85069587285
VL  - 58
SP  - 46
EP  - 53
JO  - Current Opinion in Neurobiology
JF  - Current Opinion in Neurobiology
SN  - 0959-4388
ER  - 