Abstract
Item Response Theory (IRT) is commonly used to calibrate and equate state assessment programs. Equating is a crucial step in generating the score conversion table used to produce a scale score and proficiency level for each individual student. Is the equating conducted properly, given its practical consequences? Is the anchor set applied appropriately for equating? Are there outlier anchor items that should be removed from the anchor set? This presentation highlights considerations for evaluating equating results, including IRT model fit, test characteristic curves (TCCs), score distributions, the score and proficiency profile from the reference year, and the properties of the anchor set. Considerations and alternatives for removing outlier anchor items are also discussed, and examples are presented to illustrate the practical consequences of equating.
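Two of the checks named above, TCC comparison and screening for outlier anchor items, can be sketched briefly. The snippet below is a minimal illustration only, assuming a 3PL item response function and a robust-z check on anchor difficulty displacement; the item parameters, the 1.7 scaling constant, and the 2.0 cutoff are hypothetical choices for illustration, not the procedure reported in the presentation.

```python
import numpy as np

def tcc(theta, a, b, c):
    """Test characteristic curve: expected raw score at each theta,
    assuming a 3PL item response function for every item."""
    theta = np.asarray(theta, dtype=float)[:, None]              # shape (n_theta, 1)
    p = c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))   # (n_theta, n_items)
    return p.sum(axis=1)                                         # expected score per theta

def flag_outlier_anchors(b_ref, b_new, z_crit=2.0):
    """Flag anchor items whose difficulty displacement (new minus reference,
    after both are placed on a common scale) is extreme relative to the other
    anchors, using a robust z based on the median and the MAD."""
    d = np.asarray(b_new, dtype=float) - np.asarray(b_ref, dtype=float)
    mad = np.median(np.abs(d - np.median(d)))
    robust_z = (d - np.median(d)) / (1.4826 * mad)
    return np.abs(robust_z) > z_crit

# Hypothetical anchor parameters for illustration only (5 anchor items).
a_ref = np.array([1.0, 0.8, 1.2, 0.9, 1.1])
b_ref = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
c_ref = np.full(5, 0.2)
b_cur = np.array([-0.88, -0.47, -0.05, 0.58, 1.85])   # post-transformation difficulties

theta_grid = np.linspace(-4, 4, 81)
print(tcc(theta_grid, a_ref, b_ref, c_ref))   # reference-year TCC on the theta grid
print(flag_outlier_anchors(b_ref, b_cur))     # last anchor flagged as a potential outlier
```

In practice the TCCs of the reference and current forms would be compared on the common scale, and any flagged anchor would be reviewed (and the transformation re-estimated without it) rather than dropped automatically.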
Original language | English |
---|---|
Publication status | Published - 2011 |
Event | The 76th Annual Meeting and 17th International Meeting of the Psychometric Society - The Hong Kong Institute of Education, Hong Kong, China |
Duration | 19 Jul 2011 → 22 Jul 2011 |
Conference
Conference | The 76th Annual Meeting and 17th International Meeting of the Psychometric Society |
---|---|
Abbreviated title | IMPS2011 |
Country/Territory | China |
City | Hong Kong |
Period | 19/07/11 → 22/07/11 |