A new item response theory model for rater centrality using a hierarchical rater model approach

Xue-Lan QIU, Ming Ming CHIU, Wen Chung WANG, Po-Hsi CHEN

Research output: Contribution to journal › Article › peer-review

Abstract

Rater centrality, in which raters overuse the middle scores of a rating scale, is a common rater error that can affect test scores and subsequent decisions. Past studies on rater errors have focused on rater severity and inconsistency, neglecting rater centrality. This study proposes a new model within the hierarchical rater model framework that explicitly specifies and directly estimates rater centrality in addition to rater severity and inconsistency. Simulations were conducted using the freeware JAGS to evaluate the parameter recovery of the new model and the consequences of ignoring rater centrality. The results revealed that the model had good parameter recovery, with small bias, low root mean square errors, and high test score reliability, especially when a fully crossed linking design was used. Ignoring centrality yielded poor estimates of item difficulty, person ability, and rater errors, and underestimated reliability. We also showcase the use of the new model with an empirical example involving English essays from the Advanced Placement exam. Copyright © 2021 The Psychonomic Society, Inc.
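For orientation, the sketch below shows the rating (first) stage of a standard hierarchical rater model, in which a rater's observed score is a noisy copy of an ideal category, shifted by rater severity and blurred by rater inconsistency. The second line adds a purely illustrative centrality term; the symbol ω_r and the way it shrinks scores toward the middle category are assumptions for exposition, not the article's actual parameterization.

% Rating stage of a standard hierarchical rater model: rater r's observed
% score X_{ijr} for person j on item i, given the ideal category \xi_{ij},
% follows a discretized normal kernel centered at the ideal category shifted
% by rater severity \phi_r, with rater inconsistency \psi_r.
P(X_{ijr} = k \mid \xi_{ij}) \propto
  \exp\!\left[ -\frac{1}{2\psi_r^{2}} \bigl( k - (\xi_{ij} + \phi_r) \bigr)^{2} \right],
  \qquad k = 0, 1, \ldots, K.

% Hypothetical centrality extension (illustrative only, not the article's
% specification): a shrinkage parameter \omega_r \in [0, 1] pulls the kernel's
% center toward the middle category m = K/2, so a larger \omega_r corresponds
% to heavier use of the middle scores.
P(X_{ijr} = k \mid \xi_{ij}) \propto
  \exp\!\left[ -\frac{1}{2\psi_r^{2}}
  \Bigl( k - \bigl[(1-\omega_r)(\xi_{ij} + \phi_r) + \omega_r m\bigr] \Bigr)^{2} \right].

A rater with ω_r = 0 behaves as in the standard model, while ω_r near 1 concentrates probability on the middle category regardless of the ideal rating, which is the kind of centrality effect the article models and estimates explicitly.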
Original language: English
Journal: Behavior Research Methods
Early online date: 01 Nov 2021
DOIs: 10.3758/s13428-021-01699-y
Publication status: E-pub ahead of print - 01 Nov 2021

Citation

Qiu, X.-L., Chiu, M. M., Wang, W.-C., & Chen, P.-H. (2021). A new item response theory model for rater centrality using a hierarchical rater model approach. Behavior Research Methods. Advance online publication. https://doi.org/10.3758/s13428-021-01699-y

Keywords

  • Rater errors
  • Centrality effect
  • Hierarchical rater model
  • Item response theory
