Coder reliability and misclassification in the human coding of party manifestos

Slava Mikhaylov, Michael Laver, Kenneth R. Benoit

    Research output: Contribution to journal › Article

    Abstract

    The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP "gold standard" codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.
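    The abstract turns on two quantities: a per-category misclassification matrix (how often a coder's category assignment deviates from the gold-standard assignment) and the CMP's additive left-right (RILE) index, conventionally computed as the summed percentage of right-coded text units minus the summed percentage of left-coded ones. The sketch below illustrates both, with the caveat that the category labels, left/right groupings, and paired codings are hypothetical stand-ins for the CMP's 56-category scheme and its real coded manifestos; this is not the authors' replication code.

# A minimal, self-contained sketch (Python + NumPy). All names below
# (CATEGORIES, LEFT, RIGHT, the example codings) are hypothetical; the
# CMP's actual scheme has 56 categories, and the RILE index sums 13
# "right" and 13 "left" category percentages.
import numpy as np

CATEGORIES = ["left_A", "left_B", "right_A", "right_B", "neutral"]
LEFT = {"left_A", "left_B"}     # stand-ins for RILE's left categories
RIGHT = {"right_A", "right_B"}  # stand-ins for RILE's right categories

def misclassification_matrix(gold, coded):
    """Row-normalized confusion matrix: entry (i, j) estimates
    P(coder assigns category j | gold-standard category is i)."""
    idx = {c: k for k, c in enumerate(CATEGORIES)}
    counts = np.zeros((len(CATEGORIES), len(CATEGORIES)))
    for g, c in zip(gold, coded):
        counts[idx[g], idx[c]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def rile(labels):
    """Simplified additive left-right score: percentage of right-coded
    units minus percentage of left-coded units."""
    n = len(labels)
    right = 100 * sum(lab in RIGHT for lab in labels) / n
    left = 100 * sum(lab in LEFT for lab in labels) / n
    return right - left

# Hypothetical paired codings of the same six manifesto sentences.
gold  = ["left_A", "left_A", "right_A", "neutral", "right_B", "left_B"]
coder = ["left_A", "right_A", "right_A", "left_B", "right_B", "neutral"]

print(np.round(misclassification_matrix(gold, coder), 2))
print("gold RILE:", round(rile(gold), 2), "| coder RILE:", round(rile(coder), 2))

    On these toy data, a handful of left/right category swaps flips the sign of the index, illustrating the kind of systematic effect of coding misclassification on the left-right scale that the article documents empirically.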

    Original language: English (US)
    Pages (from-to): 78-91
    Number of pages: 14
    Journal: Political Analysis
    Volume: 20
    Issue number: 1
    DOI: 10.1093/pan/mpr047
    State: Published - 2012

    ASJC Scopus subject areas

    • Sociology and Political Science

    Cite this

    Coder reliability and misclassification in the human coding of party manifestos. / Mikhaylov, Slava; Laver, Michael; Benoit, Kenneth R.

    In: Political Analysis, Vol. 20, No. 1, 2012, p. 78-91.

    Mikhaylov, Slava ; Laver, Michael ; Benoit, Kenneth R. / Coder reliability and misclassification in the human coding of party manifestos. In: Political Analysis. 2012 ; Vol. 20, No. 1. pp. 78-91.
    @article{257e0dc64f13425bada310ae14050407,
    title = "Coder reliability and misclassification in the human coding of party manifestos",
    abstract = "The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP {"}gold standard{"} codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.",
    author = "Mikhaylov, Slava and Laver, Michael and Benoit, {Kenneth R.}",
    year = "2012",
    doi = "10.1093/pan/mpr047",
    language = "English (US)",
    volume = "20",
    pages = "78--91",
    journal = "Political Analysis",
    issn = "1047-1987",
    publisher = "Oxford University Press",
    number = "1",

    }

    TY - JOUR

    T1 - Coder reliability and misclassification in the human coding of party manifestos

    AU - Mikhaylov, Slava

    AU - Laver, Michael

    AU - Benoit, Kenneth R.

    PY - 2012

    Y1 - 2012

    AB - The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP "gold standard" codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.

    UR - http://www.scopus.com/inward/record.url?scp=84856203738&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=84856203738&partnerID=8YFLogxK

    U2 - 10.1093/pan/mpr047

    DO - 10.1093/pan/mpr047

    M3 - Article

    AN - SCOPUS:84856203738

    VL - 20

    SP - 78

    EP - 91

    JO - Political Analysis

    JF - Political Analysis

    SN - 1047-1987

    IS - 1

    ER -