An empirical analysis of choice effects in examinee-selected items

Chen Wei LIU, Wen Chung WANG

Research output: Contribution to conference › Paper

Abstract

Background: In the examinee-selected (ES) design, respondents answer a fixed number of items from a larger set of given items (e.g., 2 of 5 given items). The ES design may enhance students' learning motivation and reduce their testing anxiety. However, these advantages come at a price: scores obtained from different selection combinations are not comparable, and the resulting incomplete data may be missing not at random (MNAR), so that standard IRT models become inappropriate.

Aims: We conducted an experiment with the "Choose one, Answer all" (COAA) design, in which students were instructed to preview each pair of items, indicate which item of the pair they preferred, explain their reasons, and then answer both items. We fit a recently developed class of IRT models to the data to validate the new models.

Methods: To account for choice effects in ES items, we (Liu & Wang, 2014; Wang & Liu, 2015) developed two classes of IRT models. In the first class, choice effects are accounted for by adding a new latent variable to a standard IRT model; the correlation between this latent variable and the intended-to-be-measured latent trait quantifies how strong the choice effect is and how seriously the missing-at-random assumption is violated. In the second class, persons showing different selection patterns on ES items are allowed to have different means and variances on the latent variable.

Sample: 513 junior students (aged approximately 14) participated in the experiment. The mathematics test consisted of two mandatory items and seven pairs of multiple-choice items, administered under the COAA design.

Results: Because of the COAA design, the item and person parameters could be estimated from the complete data. When the new and the traditional IRT models were fit to the incomplete data, with unselected items treated as missing, the new IRT models showed better model-data fit than the traditional models. A significantly positive correlation between the latent variable and the target latent trait was found, indicating a nonignorable choice effect. Additionally, different selection patterns had different distributions on the latent variable for the choice effect. The new models also yielded parameter estimates closer to those obtained from the complete data than did the traditional models.

Conclusions: The new models were validated by the COAA design. The choice effect was positive and nonignorable, and different selection patterns followed different distributions on the latent variable for choice effects.

Future Directions: Students sampled from the same school are likely more homogeneous on the variable of interest (e.g., mathematical proficiency) than those sampled from different schools. In recent years, multilevel IRT models (Fox, 2005; Fox & Glas, 2001; Wang & Qiu, 2013) have been developed to account for such a multilevel data structure. It would be of great interest to embed the new IRT models within the multilevel framework.
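The mechanism behind the nonignorable choice effect can be sketched with a small, hypothetical simulation: a latent trait (ability) and a latent choice variable are drawn with a positive correlation, item selection within each pair depends on the choice variable, and the unselected item is recorded as missing. All concrete values here (rho = 0.5, a Rasch response model, logistic selection, the sample sizes) are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Latent trait theta (the measured ability) and latent choice variable eta,
# correlated with an assumed rho = 0.5. Because selection depends on eta,
# which correlates with theta, the unselected responses are MNAR.
n_persons, n_pairs = 500, 7
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
theta, eta = rng.multivariate_normal([0.0, 0.0], cov, size=n_persons).T

# Arbitrary difficulties for the two items in each pair.
b_first = rng.normal(0.0, 1.0, n_pairs)
b_second = rng.normal(0.0, 1.0, n_pairs)

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# Complete data: everyone answers both items of each pair (as in COAA).
y_first = rng.random((n_persons, n_pairs)) < rasch_prob(theta, b_first)
y_second = rng.random((n_persons, n_pairs)) < rasch_prob(theta, b_second)

# Selection: persons with higher eta tend to pick the first item of a pair.
pick_first = rng.random((n_persons, n_pairs)) < 1.0 / (1.0 + np.exp(-eta[:, None]))

# Observed (ES-design) data: the unselected item is missing.
y_obs_first = np.where(pick_first, y_first, np.nan)

# Mean ability of those who picked the first item of pair 0 exceeds that of
# those who did not, so analyses that ignore the selection are biased.
picked = theta[pick_first[:, 0]].mean()
not_picked = theta[~pick_first[:, 0]].mean()
print(round(picked - not_picked, 3))
```

Under the COAA design both responses are available, so this selection-induced bias can be checked directly against the complete data, which is what motivates the comparison reported in the Results.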
Original language: English
Publication status: Published - Aug 2015


Citation

Liu, C.-W., & Wang, W.-C. (2015, August). An empirical analysis of choice effects in examinee-selected items. Paper presented at the Pacific Rim Objective Measurement Symposium 2015, Kyushu Sangyo University, Fukuoka, Japan.