This paper adopted a bifactor item response theory (IRT) approach to account for the wording effect in mixed-format scales: a general factor represents the latent construct the test is designed to measure, and a nuisance factor captures the wording effect. Two empirical examples, from PISA and TIMSS, were analyzed in WinBUGS to compare the proposed approach with a standard IRT approach. Results indicated moderate to large wording effects from negatively worded (NW) items; ignoring the wording effect not only overestimated test reliability but also dramatically changed person rankings.
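The abstract does not give the model equations, but the structure it describes (a general factor loading on all items, plus a nuisance wording factor loading only on NW items) can be sketched as a bifactor 2PL response function. All parameter values and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bifactor_prob(theta_g, theta_w, a_g, a_w, d, is_nw):
    """P(endorse) under a hypothetical bifactor 2PL: the general factor
    (theta_g) loads on every item; the nuisance wording factor (theta_w)
    loads only on negatively worded (NW) items."""
    logit = a_g * theta_g + (a_w * theta_w if is_nw else 0.0) + d
    return 1.0 / (1.0 + np.exp(-logit))

# Simulate 1000 respondents on 6 items, the last 3 negatively worded.
n, items = 1000, 6
theta_g = rng.standard_normal(n)   # latent construct
theta_w = rng.standard_normal(n)   # wording (nuisance) factor, orthogonal
a_g = np.full(items, 1.2)          # illustrative general slopes
a_w = np.full(items, 0.8)          # illustrative wording slopes
d = np.zeros(items)                # illustrative intercepts

responses = np.zeros((n, items), dtype=int)
for i in range(items):
    p = bifactor_prob(theta_g, theta_w, a_g[i], a_w[i], d[i], is_nw=(i >= 3))
    responses[:, i] = rng.random(n) < p

print(responses.mean(axis=0))  # endorsement rates per item
```

The extra dependence among NW items, beyond what the general factor explains, is exactly what the nuisance factor absorbs; a standard unidimensional IRT model forces that shared wording variance into the general factor, which is the mechanism behind the inflated reliability the abstract reports.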
Publication status: Published - Apr 2014
Event: 2014 Annual Meeting of the American Educational Research Association (AERA 2014): "The Power of Education Research for Innovation in Practice and Policy" - Philadelphia, PA, United States
Duration: 03 Apr 2014 → 07 Apr 2014