Item Response Models for Multiple Attempts With Incomplete Data

Yoav Bergner, Ikkyu Choi, Katherine E. Castellano

Research output: Contribution to journal › Article

Abstract

Allowance for multiple chances to answer constructed response questions is a prevalent feature in computer-based homework and exams. We consider the use of item response theory in the estimation of item characteristics and student ability when multiple attempts are allowed but no explicit penalty is deducted for extra tries. This is common practice in online formative assessments, where the number of attempts is often unlimited. In these environments, some students may not always answer-until-correct, but may rather terminate a response process after one or more incorrect tries. We contrast the cases of graded and sequential item response models, both unidimensional models which do not explicitly account for factors other than ability. These approaches differ not only in terms of log-odds assumptions but, importantly, in terms of handling incomplete data. We explore the consequences of model misspecification through a simulation study and with four online homework data sets. Our results suggest that model selection is insensitive for complete data, but quite sensitive to whether missing responses are regarded as informative (of inability) or not (e.g., missing at random). Under realistic conditions, a sequential model with similar parametric degrees of freedom to a graded model can account for more response patterns and outperforms the latter in terms of model fit.
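To make the contrast drawn in the abstract concrete, the two model families differ in which log-odds they set linear in ability. The forms below are the standard parameterizations of these families (Samejima's graded response model and Tutz's sequential, or continuation-ratio, model); the article's exact parameterization may differ. For student i with ability \theta_i responding to item j with discrimination a_j and step difficulties b_{jk}, the graded model constrains the cumulative log-odds over score categories k:

\[
\log \frac{P(X_{ij} \ge k \mid \theta_i)}{P(X_{ij} < k \mid \theta_i)} = a_j(\theta_i - b_{jk}),
\]

while the sequential model constrains the log-odds of succeeding at step k given that step k is reached:

\[
\log \frac{P(X_{ij} \ge k \mid X_{ij} \ge k-1,\, \theta_i)}{P(X_{ij} = k-1 \mid X_{ij} \ge k-1,\, \theta_i)} = a_j(\theta_i - b_{jk}).
\]

Because the sequential model factors an item response into per-attempt steps, a student who quits after an incorrect attempt still contributes observed outcomes for the attempts actually made; the graded model scores the item as a single ordinal category, so an abandoned item must be treated either as the lowest category (informative of inability) or as missing at random, which is the sensitivity the abstract highlights.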

Original language: English (US)
Pages (from-to): 415-436
Number of pages: 22
Journal: Journal of Educational Measurement
Volume: 56
Issue number: 2
DOI: 10.1111/jedm.12214
State: Published - Jun 1 2019

ASJC Scopus subject areas

  • Education
  • Developmental and Educational Psychology
  • Applied Psychology
  • Psychology (miscellaneous)

Cite this

Bergner, Y., Choi, I., & Castellano, K. E. (2019). Item Response Models for Multiple Attempts With Incomplete Data. Journal of Educational Measurement, 56(2), 415-436. https://doi.org/10.1111/jedm.12214
