Population rate-coding predicts correctly that human sound localization depends on sound intensity

Antje Ihlefeld, Nima Alamatsaz, Robert M. Shapley

Research output: Contribution to journal › Article

Abstract

Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. Our behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
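The hemispheric-difference prediction in the abstract can be illustrated with a minimal sketch. This is not the authors' actual model: the sigmoid rate functions, the level-dependent gain, and the names `hemisphere_rate` and `decoded_laterality` are all illustrative assumptions. The key ingredient is a level-invariant read-out of the left-right rate difference, which necessarily places soft sounds closer to the midline.

```python
import math

def hemisphere_rate(itd_us, level_gain, slope=0.005):
    # Firing rate of one hemispheric channel: assumed to be a sigmoid
    # of interaural time difference (ITD, in microseconds), scaled by
    # an overall gain that grows with sound level (assumption).
    return level_gain / (1.0 + math.exp(-slope * itd_us))

def decoded_laterality(itd_us, level_gain):
    # Hemispheric-difference read-out: perceived laterality is taken
    # to be proportional to the left-minus-right rate difference,
    # with a fixed (level-invariant) mapping from rate to location.
    left = hemisphere_rate(itd_us, level_gain)
    right = hemisphere_rate(-itd_us, level_gain)
    return left - right

# The same 300-us ITD produces a smaller rate difference at the lower
# sound level, i.e. a percept biased toward the midline.
loud = decoded_laterality(300, level_gain=100.0)  # higher sound level
soft = decoded_laterality(300, level_gain=40.0)   # lower sound level
print(loud, soft)
assert abs(soft) < abs(loud)
```

Under these assumptions a labelled-line model would instead decode the same tuned-channel identity regardless of gain, yielding a level-invariant percept; the two models therefore diverge exactly as the abstract states.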

Original language: English (US)
Journal: eLife
Volume: 8
DOI: 10.7554/eLife.47027
State: Published - Oct 21 2019

Keywords

  • hearing
  • human
  • interaural time difference
  • Jeffress model
  • neural coding
  • neuroscience
  • psychometrics
  • sound localization

ASJC Scopus subject areas

  • Neuroscience (all)
  • Immunology and Microbiology (all)
  • Biochemistry, Genetics and Molecular Biology (all)

Cite this

Population rate-coding predicts correctly that human sound localization depends on sound intensity. / Ihlefeld, Antje; Alamatsaz, Nima; Shapley, Robert M.

In: eLife, Vol. 8, 21.10.2019.

Research output: Contribution to journal › Article

@article{fe6b28d27e8b4f96bf9f5f1005568db6,
title = "Population rate-coding predicts correctly that human sound localization depends on sound intensity",
abstract = "Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.",
keywords = "hearing, human, interaural time difference, Jeffress model, neural coding, neuroscience, psychometrics, sound localization",
author = "Ihlefeld, Antje and Alamatsaz, Nima and Shapley, {Robert M.}",
year = "2019",
month = "10",
day = "21",
doi = "10.7554/eLife.47027",
language = "English (US)",
volume = "8",
journal = "eLife",
issn = "2050-084X",
publisher = "eLife Sciences Publications",

}

TY - JOUR

T1 - Population rate-coding predicts correctly that human sound localization depends on sound intensity

AU - Ihlefeld, Antje

AU - Alamatsaz, Nima

AU - Shapley, Robert M.

PY - 2019/10/21

Y1 - 2019/10/21

N2 - Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.

AB - Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.

KW - hearing

KW - human

KW - interaural time difference

KW - Jeffress model

KW - neural coding

KW - neuroscience

KW - psychometrics

KW - sound localization

UR - http://www.scopus.com/inward/record.url?scp=85073590515&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85073590515&partnerID=8YFLogxK

U2 - 10.7554/eLife.47027

DO - 10.7554/eLife.47027

M3 - Article

C2 - 31633481

AN - SCOPUS:85073590515

VL - 8

JO - eLife

JF - eLife

SN - 2050-084X

ER -