Test data can be complicated enough that standard IRT models become inefficient. For example, items in the same testlet may be locally dependent after controlling for the latent trait of interest. Two major strategies have been adopted to account for local dependence among items. In the "fixed-effect" strategy, items that might show local dependence are reorganized into an "item bundle," and a set of fixed-effect item parameters is then used to describe the relationship among all possible response patterns in the bundle. The fixed-effect strategy, although very comprehensive, becomes difficult to manage when the number of items in a bundle or the number of item categories is large. The "random-effect" strategy is an alternative, in which a set of random-effect parameters (latent variables) is added to standard IRT models. Testlet response theory models adopt this strategy to account for local dependence among items within a testlet; it is hoped that, through the inclusion of additional latent variables, the items become locally independent. The advantage of the random-effect strategy is that the usual parameters attached to individual items (e.g., the a-, b-, and c-parameters) remain attainable, at the potential cost of a computational burden due to high dimensionality. In this presentation, I will introduce the random-effect strategy using testlets as a template and then apply it to the following testing issues: (a) positively and negatively worded items in the same inventory, (b) subjective judgment on the category labels of rating scale items across respondents, (c) intra- and inter-rater variation in severity, (d) local dependence among repeated ratings due to interaction among raters prior to giving ratings, and (e) nonignorable choice effects of examinee-selected items.
Publication status: Published - 2012