Phoneme discrimination from MEG data

Tuomas Lukka, Bernd Schoner, Alec Marantz

Research output: Contribution to journal › Article

Abstract

We treat magnetoencephalographic (MEG) data in a signal detection framework to discriminate between different phonemes heard by a test subject. Our data set consists of responses evoked by the voiced syllables /bae/ and /dae/ and the corresponding voiceless syllables /pae/ and /tae/. The data yield well to principal component analysis (PCA), with a reasonable subspace on the order of three components out of 37 channels. To discriminate between responses to the voiced and voiceless versions of a consonant, we form a feature vector by either matched filtering or wavelet packet decomposition and use a mixture-of-experts model to classify the stimuli. Both choices of feature vector lead to significant detection accuracy. Furthermore, we show how to estimate the onset time of a stimulus from a continuous data stream. (C) 2000 Elsevier Science B.V.
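The processing chain the abstract describes (project 37-channel evoked responses onto a ~3-component PCA subspace, then compute a matched-filter feature per trial) can be sketched as follows. This is a minimal illustration on synthetic data: the trial counts, sampling length, and the voiced/voiceless split used for the template are assumptions for the example, not the paper's actual recordings or its exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for evoked MEG epochs: 100 trials x 37 channels x 200 samples.
epochs = rng.standard_normal((100, 37, 200))

# Stack every time sample of every trial as one observation of the 37-channel sensor vector.
X = epochs.transpose(0, 2, 1).reshape(-1, 37)
X -= X.mean(axis=0)

# PCA via SVD of the centered data matrix; singular values give the variance per component.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep the three leading spatial components and project each epoch onto them.
components = Vt[:3]                                   # (3, 37) spatial patterns
reduced = epochs.transpose(0, 2, 1) @ components.T    # (trials, samples, 3)

# Matched-filter feature (sketch): correlate each reduced epoch with a class template,
# here the mean of the first 50 trials, standing in for one stimulus class.
template = reduced[:50].mean(axis=0)                  # (samples, 3)
features = np.tensordot(reduced, template, axes=([1, 2], [0, 1]))  # one scalar per trial
```

In a real analysis the resulting per-trial features (or a wavelet-packet alternative) would feed a classifier such as the mixture-of-experts model named in the abstract.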

Original language: English (US)
Pages (from-to): 153-165
Number of pages: 13
Journal: Neurocomputing
Volume: 31
Issue number: 1-4
DOI: 10.1016/S0925-2312(99)00178-2
State: Published - Mar 2000


Keywords

  • MEG data
  • Phoneme discrimination
  • Signal detection

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cellular and Molecular Neuroscience

Cite this

Phoneme discrimination from MEG data. / Lukka, Tuomas; Schoner, Bernd; Marantz, Alec.

In: Neurocomputing, Vol. 31, No. 1-4, 03.2000, p. 153-165.

@article{38e25c35ceba425bb617e0afba349d98,
title = "Phoneme discrimination from MEG data",
abstract = "We treat magnetoencephalographic (MEG) data in a signal detection framework to discriminate between different phonemes heard by a test subject. Our data set consists of responses evoked by the voiced syllables /bae/ and /dae/ and the corresponding voiceless syllables /pae/ and /tae/. The data yield well to principal component analysis (PCA), with a reasonable subspace on the order of three components out of 37 channels. To discriminate between responses to the voiced and voiceless versions of a consonant, we form a feature vector by either matched filtering or wavelet packet decomposition and use a mixture-of-experts model to classify the stimuli. Both choices of feature vector lead to significant detection accuracy. Furthermore, we show how to estimate the onset time of a stimulus from a continuous data stream. (C) 2000 Elsevier Science B.V.",
keywords = "MEG data, Phoneme discrimination, Signal detection",
author = "Tuomas Lukka and Bernd Schoner and Alec Marantz",
year = "2000",
month = "3",
doi = "10.1016/S0925-2312(99)00178-2",
language = "English (US)",
volume = "31",
pages = "153--165",
journal = "Neurocomputing",
issn = "0925-2312",
publisher = "Elsevier",
number = "1-4",

}

TY - JOUR

T1 - Phoneme discrimination from MEG data

AU - Lukka, Tuomas

AU - Schoner, Bernd

AU - Marantz, Alec

PY - 2000/3

Y1 - 2000/3

AB - We treat magnetoencephalographic (MEG) data in a signal detection framework to discriminate between different phonemes heard by a test subject. Our data set consists of responses evoked by the voiced syllables /bae/ and /dae/ and the corresponding voiceless syllables /pae/ and /tae/. The data yield well to principal component analysis (PCA), with a reasonable subspace on the order of three components out of 37 channels. To discriminate between responses to the voiced and voiceless versions of a consonant, we form a feature vector by either matched filtering or wavelet packet decomposition and use a mixture-of-experts model to classify the stimuli. Both choices of feature vector lead to significant detection accuracy. Furthermore, we show how to estimate the onset time of a stimulus from a continuous data stream. (C) 2000 Elsevier Science B.V.

KW - MEG data

KW - Phoneme discrimination

KW - Signal detection

UR - http://www.scopus.com/inward/record.url?scp=0034059369&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0034059369&partnerID=8YFLogxK

U2 - 10.1016/S0925-2312(99)00178-2

DO - 10.1016/S0925-2312(99)00178-2

M3 - Article

VL - 31

SP - 153

EP - 165

JO - Neurocomputing

JF - Neurocomputing

SN - 0925-2312

IS - 1-4

ER -