On the localization of complex sounds: Temporal encoding based on input-slope coincidence detection of envelopes

Yan Gai, Vibhakar C. Kotak, Dan H. Sanes, John Rinzel

Research output: Contribution to journal › Article

Abstract

Behavioral and neural findings demonstrate that animals can locate low-frequency sounds along the azimuth by detecting microsecond interaural time differences (ITDs). Information about ITDs is also available in the amplitude modulations (i.e., envelope) of high-frequency sounds. Since medial superior olivary (MSO) neurons encode low-frequency ITDs, we asked whether they employ a similar mechanism to process envelope ITDs carried by high-frequency sounds, and how effective that mechanism is compared with low-frequency processing. We developed a novel hybrid in vitro dynamic-clamp approach, which enabled us to mimic synaptic input to brain-slice neurons in response to virtual sound and to create conditions that cannot be achieved naturally but are useful for testing our hypotheses. For each simulated ear, a computer-generated virtual sound was used as input to a computational auditory-nerve model. Model spike times were converted into synaptic input for MSO neurons, and ITD tuning curves were derived for several virtual-sound conditions: low-frequency pure tones, high-frequency tones modulated with two types of envelope, and speech sequences. Computational models were used to verify the physiological findings and to explain the biophysical mechanism underlying the observed ITD coding. Both recordings and simulations indicate that MSO neurons are sensitive to ITDs carried by spectrotemporally complex virtual sounds, including speech tokens. Our findings strongly suggest that MSO neurons can encode ITDs across a broad frequency spectrum using an input-slope-based coincidence-detection mechanism. Our data also provide a cellular-level explanation for the human localization performance with high-frequency sounds described by previous investigators.
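The abstract outlines a processing pipeline: a computer-generated virtual sound is presented to each simulated ear, an auditory-nerve model converts it to spike times, and those spike times become synaptic conductance commands that a dynamic clamp injects into MSO neurons. The sketch below is not the authors' code; it only illustrates that flow in Python under stated assumptions (a sinusoidally amplitude-modulated high-frequency tone whose envelope carries the ITD, an inhomogeneous-Poisson stand-in for the detailed auditory-nerve model, and alpha-function synaptic conductances), with all parameter values chosen for illustration.

import numpy as np

fs = 100_000                    # sample rate in Hz (assumed)
dur = 0.5                       # stimulus duration in s (assumed)
t = np.arange(0, dur, 1 / fs)

carrier_hz = 4000.0             # high-frequency carrier (assumed)
mod_hz = 128.0                  # envelope modulation rate (assumed)
itd_s = 300e-6                  # 300-microsecond envelope ITD (assumed)

def am_tone(t, itd):
    # Sinusoidally amplitude-modulated tone; the ITD delays the envelope only,
    # leaving the fine structure of the high-frequency carrier unchanged.
    env = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * (t - itd)))
    return env * np.sin(2 * np.pi * carrier_hz * t)

def envelope_to_spikes(sound, peak_rate=300.0, rng=None):
    # Simplified stand-in for an auditory-nerve model: inhomogeneous Poisson
    # spikes whose rate follows the half-wave-rectified, low-pass envelope.
    rng = np.random.default_rng() if rng is None else rng
    rect = np.maximum(sound, 0.0)
    win = int(1e-3 * fs)                              # ~1-ms smoothing window
    env = np.convolve(rect, np.ones(win) / win, mode="same")
    rate = peak_rate * env / env.max()
    return t[rng.random(t.size) < rate / fs]          # spike times in s

def spikes_to_conductance(spike_times, g_max=20e-9, tau=0.2e-3):
    # Sum an alpha-function conductance (siemens) at each input spike time;
    # this waveform is the kind of command a dynamic clamp could inject.
    g = np.zeros_like(t)
    for ts in spike_times:
        idx = t >= ts
        dt = t[idx] - ts
        g[idx] += g_max * (dt / tau) * np.exp(1 - dt / tau)
    return g

# Each "ear" drives its own excitatory conductance; the envelope of one ear
# lags the other by the ITD.
g_ipsi = spikes_to_conductance(envelope_to_spikes(am_tone(t, 0.0)))
g_contra = spikes_to_conductance(envelope_to_spikes(am_tone(t, itd_s)))

In the study itself, synaptic inputs derived in this spirit were delivered to real brain-slice MSO neurons through the dynamic clamp, and ITD tuning curves were derived for the virtual-sound conditions listed in the abstract.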

Original language: English (US)
Pages (from-to): 802-813
Number of pages: 12
Journal: Journal of Neurophysiology
Volume: 112
Issue number: 4
DOIs: 10.1152/jn.00044.2013
State: Published - Aug 15 2014

Keywords

  • Auditory processing
  • Kv1.1
  • Phasic firing
  • Sound localization

ASJC Scopus subject areas

  • Physiology
  • Neuroscience (all)
  • Medicine (all)

Cite this

On the localization of complex sounds: Temporal encoding based on input-slope coincidence detection of envelopes. / Gai, Yan; Kotak, Vibhakar C.; Sanes, Dan H.; Rinzel, John.

In: Journal of Neurophysiology, Vol. 112, No. 4, 15.08.2014, p. 802-813.

Research output: Contribution to journal › Article

@article{fb27e127a9dc4bb1b0517d9d012025ca,
title = "On the localization of complex sounds: Temporal encoding based on input-slope coincidence detection of envelopes",
keywords = "Auditory processing, Kv1.1, Phasic firing, Sound localization",
author = "Yan Gai and Kotak, {Vibhakar C.} and Sanes, {Dan H.} and John Rinzel",
year = "2014",
month = "8",
day = "15",
doi = "10.1152/jn.00044.2013",
language = "English (US)",
volume = "112",
pages = "802--813",
journal = "Journal of Neurophysiology",
issn = "0022-3077",
publisher = "American Physiological Society",
number = "4",

}

TY - JOUR

T1 - On the localization of complex sounds

T2 - Temporal encoding based on input-slope coincidence detection of envelopes

AU - Gai, Yan

AU - Kotak, Vibhakar C.

AU - Sanes, Dan H.

AU - Rinzel, John

PY - 2014/8/15

Y1 - 2014/8/15

KW - Auditory processing

KW - Kv1.1

KW - Phasic firing

KW - Sound localization

UR - http://www.scopus.com/inward/record.url?scp=84906082200&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84906082200&partnerID=8YFLogxK

U2 - 10.1152/jn.00044.2013

DO - 10.1152/jn.00044.2013

M3 - Article

C2 - 24848460

AN - SCOPUS:84906082200

VL - 112

SP - 802

EP - 813

JO - Journal of Neurophysiology

JF - Journal of Neurophysiology

SN - 0022-3077

IS - 4

ER -