In spoken word recognition, the future predicts the past

Laura Gwilliams, Tal Linzen, David Poeppel, Alec Marantz

Research output: Contribution to journal › Article

Abstract

Speech is an inherently noisy and ambiguous signal. To fluently derive meaning, a listener must integrate contextual information to guide interpretations of the sensory input. Although many studies have demonstrated the influence of prior context on speech perception, the neural mechanisms supporting the integration of subsequent context remain unknown. Using MEG to record from human auditory cortex, we analyzed responses to spoken words with a varyingly ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Fifty participants (both male and female) were recruited across two MEG experiments. Our findings suggest that primary auditory cortex is sensitive to phonological ambiguity very early during processing, at just 50 ms after onset. Subphonemic detail is preserved in auditory cortex over long timescales and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter timescale of ~450 ms. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be integrated with top-down lexical information.
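The question the abstract describes, namely how long subphonemic detail about an ambiguous onset phoneme remains recoverable from auditory-cortex responses, is typically addressed with time-resolved decoding of MEG epochs. The sketch below (Python, using MNE-Python and scikit-learn) illustrates that general approach only; the epochs file name, the event coding of the onset-phoneme category, and the choice of classifier are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch of time-resolved ("sliding") decoding of onset-phoneme category
# from MEG epochs. File name, event coding, and classifier are assumptions made
# for illustration; they are not taken from the paper.
import mne
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, cross_val_multiscore

# Load epochs time-locked to word onset (hypothetical file).
epochs = mne.read_epochs("subject01_word_onset-epo.fif")

X = epochs.get_data()        # shape: (n_trials, n_sensors, n_times)
y = epochs.events[:, 2]      # assumed binary coding of the onset-phoneme category

# Fit one classifier per time sample across sensors.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder = SlidingEstimator(clf, scoring="roc_auc", n_jobs=-1)

# 5-fold cross-validated AUC as a function of time relative to word onset.
scores = cross_val_multiscore(decoder, X, y, cv=5).mean(axis=0)

peak = scores.argmax()
print("peak AUC %.2f at %d ms after word onset"
      % (scores[peak], int(epochs.times[peak] * 1000)))

In a design like the one described, sustained above-chance decoding well after the ambiguous onset phoneme would be the signature of sensory detail being maintained until later, disambiguating input arrives.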

Original language: English (US)
Pages (from-to): 7585-7599
Number of pages: 15
Journal: Journal of Neuroscience
Volume: 38
Issue number: 35
DOI: 10.1523/JNEUROSCI.0065-18.2018
State: Published - Aug 29 2018

Fingerprint

Auditory Cortex
Speech Perception
Phonetics
Recognition (Psychology)

Keywords

  • Auditory processing
  • Lexical access
  • MEG
  • Speech

ASJC Scopus subject areas

  • Neuroscience (all)
