Forced-choice (pairwise comparison) items have been widely used in personality and attitude tests, such as the Edwards Personal Preference Schedule and the Gordon Personal Profile-Inventory. In these tests, respondents are asked to select one statement from a pair, which makes the tests ipsative (self-comparison). The analysis of ipsative tests within the IRT framework has recently attracted research attention. The Rasch model for ipsative forced-choice items is especially promising because of its good measurement properties (Wang & Chen, 2013). Ipsative tests often involve many latent traits and many items, so computerized adaptive testing (CAT) is especially useful for them. In this study, we developed CAT algorithms under this model to increase its feasibility. We developed several item selection procedures and conducted a series of simulations to evaluate their performance. The simulation results showed that the proposed information-stratified method and the progressive method outperformed the random selection method and the maximum information method in terms of absolute bias and the correlation between the true and estimated abilities. As expected, the random selection method had the best item exposure control.

The 78th Annual Meeting of the Psychometric Society, Friday, July 26
Publication status: Published - Jul 2013
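The maximum information rule mentioned in the abstract can be illustrated with a minimal sketch. Note the assumptions: this uses a simplified unidimensional 2PL item response model rather than the Rasch ipsative forced-choice model studied in the paper, and the function names (`item_information`, `select_max_info`) and the toy item bank are hypothetical, introduced only for illustration.

```python
import math

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta:
    # I(theta) = a^2 * p * (1 - p), where p is the response probability
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_max_info(theta_hat, item_bank, administered):
    # Maximum-information rule: among unused items, pick the one most
    # informative at the current interim ability estimate theta_hat
    best_idx, best_info = None, -1.0
    for idx, (a, b) in enumerate(item_bank):
        if idx in administered:
            continue
        info = item_information(theta_hat, a, b)
        if info > best_info:
            best_idx, best_info = idx, info
    return best_idx

# Hypothetical bank of (discrimination, difficulty) pairs
bank = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]
# With theta_hat = 0.0, the item whose difficulty matches the estimate wins
chosen = select_max_info(0.0, bank, administered=set())
```

Because this rule always targets the most informative items, a few items tend to be selected very often; that overexposure is exactly what the stratified and progressive methods compared in the study are designed to mitigate.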