Facilitating self-directed language learning in real-life scene description tasks with automated evaluation

Ruibin ZHAO, Yipeng ZHUANG, Zhiwei XIE, Leung Ho Philip YU

Research output: Contribution to journal › Article › peer-review

Abstract

Engaging children in describing real-life scenes provides an effective approach to fostering language production and developing their language skills, enabling them to establish meaningful connections between their language proficiency and authentic contexts. However, for such learning tasks, there has been a lack of research on promoting self-directed language learning with artificial intelligence techniques, primarily due to the challenges of handling the multimodal information involved. To address this gap, this study introduced a two-stage automated evaluation method that employed emerging cross-modal matching AI techniques. First, an automated scoring model was developed to evaluate the quality of students' responses to scene description tasks. Compared with manually assigned human scores, the model scored students' descriptions accurately, as evidenced by a small mean absolute error of 0.3969 on the test set, out of a total score of 10 points. Based on the scoring results, immediate feedback was then provided to students by generating targeted comments and suggestions. The goal of this feedback was to assist students in progressively improving their descriptions of daily-life scenes, thereby enabling them to practice their language skills independently. To assess the effectiveness of the feedback, a comprehensive investigation was conducted involving 157 students from middle schools in China, and both qualitative and quantitative experimental data were collected from the students. It was found that the quality of students' descriptions improved significantly with the assistance of the immediate feedback. On average, students achieved an increase of 1.48 points in their scores after making revisions based on the feedback. In addition, students reported positive learning experiences and expressed favorable opinions regarding the language learning tasks with the automated evaluation. The findings of this study have significant implications for future research and educational practice. They not only highlight the potential of emerging cross-modal matching AI techniques for automatically evaluating learning tasks involving multimodal data but also suggest that providing immediate, targeted feedback based on automated scoring results can effectively promote students' self-directed language learning. Copyright © 2024 Elsevier Ltd.
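
The abstract does not specify the scoring model's architecture, so the following is only a minimal illustrative sketch of the general idea of cross-modal matching for scene description scoring: a pretrained CLIP model (an assumption, not the study's model) computes an image-text similarity that is mapped onto the 0–10 scale mentioned above. The scene image path, the calibration, and the 0–10 mapping are all hypothetical placeholders.

```python
# Illustrative sketch only: the paper's scoring model is not described in the
# abstract. A pretrained CLIP model (an assumption) computes a cross-modal
# image-text similarity, which is then mapped to a rough 0-10 score.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_description(image_path: str, description: str) -> float:
    """Return a rough 0-10 quality score for a scene description."""
    image = Image.open(image_path)
    inputs = processor(text=[description], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
        # Cosine similarity between image and text embeddings, in [-1, 1].
        sim = torch.nn.functional.cosine_similarity(
            outputs.image_embeds, outputs.text_embeds).item()
    # Placeholder calibration to a 0-10 scale, not the trained regression
    # the study evaluates against human scores (MAE = 0.3969).
    return max(0.0, min(10.0, (sim + 1.0) * 5.0))

# Hypothetical usage:
# print(score_description("scene.jpg", "Two children are playing football in the park."))
```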

Original language: English
Article number: 105106
Journal: Computers and Education
Volume: 219
Early online date: Jun 2024
DOIs: https://doi.org/10.1016/j.compedu.2024.105106
Publication status: Published - 2024

Citation

Zhao, R., Zhuang, Y., Xie, Z., & Yu, P. L. H. (2024). Facilitating self-directed language learning in real-life scene description tasks with automated evaluation. Computers and Education, 219, Article 105106. https://doi.org/10.1016/j.compedu.2024.105106

Keywords

  • PG student publication
