Abstract
This study addresses several important issues in the assessment of differential item functioning (DIF). It begins with the definition of DIF, the effectiveness of using item fit statistics to detect DIF, and linear modeling of DIF in dichotomous items, polytomous items, facets, and testlet-based items. Because a common metric across groups of test-takers is a prerequisite for DIF assessment, the study reviews three methods of establishing such a metric: the equal-mean-difficulty method, the all-other-item method, and the constant-item (CI) method. A small simulation demonstrates the superiority of the CI method over the others. Because the CI method relies on correct specification of DIF-free items to serve as anchors, a method of identifying such items is recommended and its effectiveness is illustrated through a simulation. Finally, the study discusses how to assess the practical significance of DIF at both the item and test levels. Copyright © 2008 Journal of Applied Measurement.
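As a rough illustration of the kind of DIF scenario the study analyzes, the sketch below (not taken from the paper; the item count, sample sizes, and 0.5-logit DIF size are illustrative assumptions) simulates Rasch-model responses for a reference and a focal group, injects DIF into one item, and flags it by comparing each item's conditional proportion correct given the rest score on the remaining items, conditioning in the spirit of the all-other-item anchoring mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (assumptions, not values from the paper)
n_per_group, n_items = 1000, 10
b = np.linspace(-1.5, 1.5, n_items)   # generating item difficulties (logits)
dif = np.zeros(n_items)
dif[-1] = 0.5                         # last item is 0.5 logits harder for the focal group


def simulate(theta, difficulties):
    """Generate dichotomous Rasch responses for abilities theta."""
    p = 1 / (1 + np.exp(-(theta[:, None] - difficulties[None, :])))
    return (rng.random(p.shape) < p).astype(int)


ref = simulate(rng.normal(0.0, 1.0, n_per_group), b)        # reference group
foc = simulate(rng.normal(0.0, 1.0, n_per_group), b + dif)   # focal group with DIF

# Crude DIF screen: for each studied item, condition on the rest score
# (total on all other items) and average the between-group difference
# in proportion correct across rest-score strata with enough cases.
for item in range(n_items):
    anchors = [j for j in range(n_items) if j != item]
    diffs = []
    for s in range(1, len(anchors)):
        r = ref[ref[:, anchors].sum(axis=1) == s, item]
        f = foc[foc[:, anchors].sum(axis=1) == s, item]
        if len(r) > 20 and len(f) > 20:
            diffs.append(r.mean() - f.mean())
    if diffs:
        print(f"item {item + 1:2d}: mean conditional difference {np.mean(diffs):+.3f}")
```

With these settings, only the last item should show a conditional difference clearly away from zero; in the paper's terms, items that show no such difference are the candidates to serve as DIF-free anchors for the constant-item method.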
| Original language | English |
| --- | --- |
| Pages (from-to) | 387-408 |
| Journal | Journal of Applied Measurement |
| Volume | 9 |
| Issue number | 4 |
| Publication status | Published - 2008 |
Citation
Wang, W. (2008). Assessment of differential item functioning. Journal of Applied Measurement, 9(4), 387-408.
Keywords
- Statistics
- Simulation methods
- Analysis of variance
- Mathematical statistics
- Mathematics