Abstract
This article describes the steps we went through in designing and validating an item bank to diagnose linguistic problems in the English academic writing of university students in Hong Kong. Test items adopt traditional item formats (e.g., MCQ, grammatical judgment tasks, and error correction) but are based on authentic language materials extracted from a manually error-tagged corpus of target students' essays. A total of 257 items were developed to assess 25 high-frequency and grave linguistic errors. To validate the test items and calibrate their psychometric qualities, four parallel tests were assembled and administered to 338 students. Rasch modeling was conducted to examine item dimensionality, differential item functioning (DIF) sizes, fit indices, reliability, and difficulty. The results supported the validity and reliability of the remaining 219 items in the bank. Moreover, we investigated the effects of item format and target error on item difficulty and item-measure correlations, as well as the relations amongst error difficulty, error frequency, and error prevalence. The item bank approach to developing diagnostic tests was found useful insofar as it provided more precise information about knowledge gaps than the corpus-based textual analysis and had the potential to pinpoint high-priority areas for remedial instruction. Copyright © 2019 Taylor & Francis.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 183-203 |
| Journal | Language Assessment Quarterly |
| Volume | 17 |
| Issue number | 2 |
| Early online date | 10 Dec 2019 |
| DOIs | |
| Publication status | Published - 2020 |