The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects

All errors are not created equal

D. Van Lancker, J. J. Sidtis

Research output: Contribution to journal › Article

Abstract

Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.
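The classification step the abstract describes — assigning each utterance to one of four emotion categories from acoustic cues (F0 variability, mean F0, syllable duration) — can be sketched as follows. Everything in this sketch is invented for illustration: the prototype feature values, the within-class spreads, and the nearest-centroid rule standing in for the study's full discriminant function analysis. The paper's actual stimuli and coefficients are not reproduced here.

```python
import math
import random

random.seed(0)
emotions = ["happy", "sad", "angry", "surprised"]

# Invented per-emotion prototypes: (F0 variability in Hz, mean F0 in Hz,
# syllable duration in s). Not the study's measurements.
prototypes = {
    "happy":     (40.0, 220.0, 0.18),
    "sad":       (15.0, 180.0, 0.30),
    "angry":     (35.0, 200.0, 0.15),
    "surprised": (55.0, 245.0, 0.22),
}
scales = (5.0, 10.0, 0.02)  # assumed within-class spread per feature

def simulate(emotion):
    """One synthetic utterance: the prototype plus Gaussian jitter."""
    return tuple(random.gauss(m, s)
                 for m, s in zip(prototypes[emotion], scales))

def classify(cues):
    """Nearest-centroid rule on standardized features -- a minimal
    stand-in for a discriminant function analysis over acoustic cues."""
    def dist(emo):
        return math.sqrt(sum(((c - m) / s) ** 2
                             for c, m, s in zip(cues, prototypes[emo], scales)))
    return min(emotions, key=dist)

# 16 stimuli (4 per emotion), as in the study; count the confusions.
# Comparing such confusions with patients' error patterns is the
# second analysis the abstract describes.
errors = sum(classify(simulate(e)) != e for e in emotions for _ in range(4))
print("misclassified:", errors, "of 16")
```

With well-separated prototypes the rule classifies most synthetic utterances correctly; shrinking the distances between prototypes (or inflating the jitter) produces confusion patterns of the kind the study compared across patient groups.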

Original language: English (US)
Pages (from-to): 963-970
Number of pages: 8
Journal: Journal of Speech and Hearing Research
Volume: 35
Issue number: 5
State: Published - 1992

ASJC Scopus subject areas

  • Otorhinolaryngology

Cite this

The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. / Van Lancker, D.; Sidtis, J. J.

In: Journal of Speech and Hearing Research, Vol. 35, No. 5, 1992, p. 963-970.

Research output: Contribution to journal › Article

@article{b5b5c524ef8c4964a2af1a4298d00ded,
title = "The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal",
abstract = "Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.",
author = "{Van Lancker}, D. and Sidtis, {J. J.}",
year = "1992",
language = "English (US)",
volume = "35",
pages = "963--970",
journal = "Journal of Speech, Language, and Hearing Research",
issn = "1092-4388",
publisher = "American Speech-Language-Hearing Association (ASHA)",
number = "5",

}

TY - JOUR

T1 - The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects

T2 - All errors are not created equal

AU - Van Lancker, D.

AU - Sidtis, J. J.

PY - 1992

Y1 - 1992

N2 - Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.

AB - Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.

UR - http://www.scopus.com/inward/record.url?scp=0026592167&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0026592167&partnerID=8YFLogxK

M3 - Article

VL - 35

SP - 963

EP - 970

JO - Journal of Speech, Language, and Hearing Research

JF - Journal of Speech, Language, and Hearing Research

SN - 1092-4388

IS - 5

ER -