Empirical Research Background: In constructing a reading ability scale for English, it was necessary to take the constituent testlets, items grouped within a single reading passage, seriously. This posed interesting and challenging problems for the calibration process.

Empirical Research Aims: This paper shows how testlets can be used as the main units for constructing the reading ability scale. The exercise highlighted issues of item-level and testlet-level invariance that are interesting and informative for Rasch model applications.

Empirical Research Sample: The sample comprises approximately 5000 primary and secondary students in Hong Kong.

Empirical Research Method: The scale calibration uses a series of linked reading comprehension tests across the full spectrum of Hong Kong primary and secondary education. A three-facet Rasch model was used, with testlets as an intermediate level between items and students. The main issue considered is the selection of linking items across levels, giving due consideration to possible variance between levels and to the need to maintain invariance across the scale. The latter is achieved by anchoring the testlets while allowing some linking items to vary within the context of the testlet.

Empirical Research RASCH: Multi-faceted Rasch models were used in the scale calibration. The exercise raises interesting issues about what constitutes a facet versus an item grouping.

Empirical Research Results: The exercise enabled the construction of a reading ability scale for English in Hong Kong.

Empirical Research Conclusions: The reading ability scale provides the basis for tracking student reading ability in English across the full spectrum of education in Hong Kong. It has already been used in Macau and will be used in Taiwan.
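To make the three-facet structure concrete, the sketch below shows the response model that a testlet-extended Rasch formulation implies: the log-odds of a correct answer is the student's ability minus the item's difficulty minus a testlet (passage) parameter. This is a minimal illustration only, not the authors' actual calibration procedure; all names and parameter values (`theta`, `gamma`, the passages and items) are hypothetical.

```python
import math

def p_correct(theta, b_item, gamma_testlet):
    """Probability of a correct response under a testlet-extended
    Rasch model: logit P = theta - b_item - gamma_testlet."""
    return 1.0 / (1.0 + math.exp(-(theta - b_item - gamma_testlet)))

# Hypothetical setup: one student, two passages (testlets) of two items each.
theta = 0.5  # student ability in logits
testlets = {
    "passage_A": {"gamma": -0.2, "items": {"A1": -0.5, "A2": 0.3}},
    "passage_B": {"gamma": 0.4,  "items": {"B1": -0.1, "B2": 0.8}},
}

for name, spec in testlets.items():
    for item, b in spec["items"].items():
        # Anchoring a testlet in calibration amounts to fixing gamma,
        # while individual item difficulties b may still vary within it.
        print(name, item, round(p_correct(theta, b, spec["gamma"]), 3))
```

Under this parameterisation, anchoring the testlet corresponds to holding `gamma_testlet` fixed across linked test forms, which is one way to read the linking strategy described in the Method section.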
Publication status: Published - Jul 2009