This paper describes an English language listening test intended as computer-based testing material for secondary school students in Hong Kong, where considerable attention is being invested in online and computer-based testing. As well as providing a school-based testing facility, the study aims to contribute to the knowledge base regarding the efficacy and reliability of computer-based testing. The paper describes the construction of an item bank of over 400 short listening items calibrated on item response theory principles. Items from this bank were used to form a traditional paper-based listening test and an adaptive computer-based test. Both forms of the test were administered to two Hong Kong Grade 11 and Grade 12 classes. Descriptive test statistics indicated that both test types discriminated effectively between school grades. In terms of comparability between test types, scores differed significantly between the two versions for the Grade 11 classes, although not for the Grade 12 classes. Test takers generally performed better on the computer-based test than on the paper-based test, confirming earlier research. Interviews with test takers after they had taken both tests indicated an even split in preference, with boys tending to favour the computer-based test and girls the paper-based test. Correlations between test takers' performance on the two test types were high enough to indicate the computer-based test's potential as a low-stakes test (its intended purpose as a school-based testing facility), although not as a high-stakes test (for example, as a territory-wide test replacing a traditional paper-based test). Copyright © 2006 Cambridge University Press.
Citation: Coniam, D. (2006). Evaluating computer-based and paper-based versions of an English-language listening test. ReCALL, 18(2), 193-211. doi: 10.1017/S0958344006000425
- English language
- Computer-based testing