The validity of examination essays in higher education: Issues and responses

Gavin Thomas Lumsden BROWN

Research output: Contribution to journal › Article

15 Citations (Scopus)

Abstract

The use of timed essay examinations is a well-established means of evaluating student learning in higher education. The reliability of essay scoring is highly problematic, and it appears that essay examination grades depend strongly on the language and organisational components of writing. Computer-assisted scoring of essays makes use of language features and has demonstrated strong similarity to human ratings. Studies of examiner behaviour show that attention to content and language features contributes to grading decisions. However, given the time constraints on essay examinations, an overemphasis on language aspects may weaken the validity of essay examination grades. This article suggests alternative approaches to the standard essay prompt which should raise the validity of essay tasks and scoring in higher education. Suggested options include redesigning tasks so that organisational and language features are less influential in scoring, and the use of content maps. Copyright © 2010 The Author. Journal compilation © 2010 Blackwell Publishing Ltd.
Original language: English
Pages (from-to): 276-291
Journal: Higher Education Quarterly
Volume: 64
Issue number: 3
DOIs
Publication status: Published - Jul 2010

Bibliographical note

Brown, G. T. L. (2010). The validity of examination essays in higher education: Issues and responses. Higher Education Quarterly, 64(3), 276-291.