The multifaceted IRT Model for examinee-selected items

Wen Chung WANG, Xue Lan QIU

Research output: Contribution to conference › Papers


In addition to mandatory items that all examinees must answer, examinees may be asked to answer a fixed number of items from a given set (e.g., 2 out of 5 items). These so-called examinee-selected (ES) items pose a challenge to standard IRT models because the unselected items may be missing not at random. A new class of IRT models has been proposed for ES items (Wang, Jin, Qiu, & Wang, 2012; Wang & Liu, 2015). These models have two facets: person and item. ES items are often in a constructed-response format and are marked by raters; therefore, in addition to item difficulty and person ability, rater severity also plays a role. Moreover, a rater may not hold a constant severity throughout the whole rating process. This study therefore proposed a new and general multifaceted IRT model to account for rater effects in ES items. Preliminary simulation studies show that the parameters of this new model can be well recovered with the freeware JAGS. An experiment with the “Choose-one-answer-all” design (Wang, Wainer, & Thissen, 1995) was conducted to collect complete data on ES items, and the new model was validated against these empirical data.
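To make the multifaceted decomposition concrete, the following is a minimal sketch of the kind of model the abstract describes, using the classic many-facet Rasch form in which the log-odds of success are person ability minus item difficulty minus rater severity. This is an illustrative assumption, not the authors' actual model, which is more general (e.g., it allows severity to drift during rating) and is estimated in JAGS rather than with the toy simulator below.

```python
import math
import random

def facets_prob(theta, b, c):
    """P(correct) under a simple many-facet Rasch model:
    logit P = theta (person ability) - b (item difficulty) - c (rater severity).
    A hypothetical sketch of the multifaceted decomposition; the paper's
    model is more general (e.g., non-constant rater severity)."""
    return 1.0 / (1.0 + math.exp(-(theta - b - c)))

def simulate_ratings(theta, b, c, n=10000, seed=0):
    """Simulate n dichotomous ratings and return the observed proportion
    of successes, which should approximate facets_prob(theta, b, c)."""
    rng = random.Random(seed)
    p = facets_prob(theta, b, c)
    return sum(rng.random() < p for _ in range(n)) / n
```

Note how the third facet works: holding person and item fixed, a more severe rater (larger `c`) lowers the success probability, which is exactly why ignoring rater effects biases ability estimates for rater-scored ES items.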
Original language: English
Publication status: Published - Jul 2015


Wang, W.-C., & Qiu, X.-L. (2015, July). The multifaceted IRT Model for examinee-selected items. Paper presented at the 2015 International Meeting of the Psychometric Society (IMPS), Beijing Normal University, Beijing, China.

