Applying the cognitive machine translation evaluation approach to Arabic

Irina Temnikova, Wajdi Zaghouani, Stephan Vogel, Nizar Habash

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    The goal of the cognitive machine translation (MT) evaluation approach is to build classifiers that assign post-editing effort scores to new texts. By evaluating the cognitive difficulty of post-editing MT output, the approach helps estimate fair compensation for post-editors in the translation industry. It classifies errors into categories based on how much cognitive effort is required to correct them, and counts the errors in each category. In this paper, we present the results of applying an existing cognitive evaluation approach to Modern Standard Arabic (MSA). We compare the number and categories of errors in three MSA texts of different MT quality (without any language-specific adaptation), and compare the MSA texts with texts from three Indo-European languages (Russian, Spanish, and Bulgarian) taken from a previous experiment. The results show how the error distributions change in moving from the MSA texts of worse MT quality to those of better MT quality, as well as a similarity across all four languages in how the texts of better MT quality are distinguished.

    Original language: English (US)
    Title of host publication: Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016
    Publisher: European Language Resources Association (ELRA)
    Pages: 3644-3651
    Number of pages: 8
    ISBN (Electronic): 9782951740891
    State: Published - Jan 1, 2016
    Event: 10th International Conference on Language Resources and Evaluation, LREC 2016 - Portoroz, Slovenia
    Duration: May 23, 2016 - May 28, 2016



    Keywords

    • Arabic
    • Machine translation evaluation
    • Post-editing

    ASJC Scopus subject areas

    • Linguistics and Language
    • Library and Information Sciences
    • Language and Linguistics
    • Education

    Cite this

    Temnikova, I., Zaghouani, W., Vogel, S., & Habash, N. (2016). Applying the cognitive machine translation evaluation approach to Arabic. In Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016 (pp. 3644-3651). European Language Resources Association (ELRA).
