Abstract
Massive open online courses (MOOCs) offer rich opportunities to understand learners’ learning experiences by examining the course evaluation content they generate. This study investigated the effectiveness of fine-tuned BERT models for the automated classification of topics in online course reviews and explored how these topics varied across disciplines and course rating groups. Based on 364,660 course review sentences spanning 13 disciplines from Class Central, 10 topic categories were identified automatically by a BERT-BiLSTM-Attention model, highlighting the potential of fine-tuned BERTs for analysing large-scale MOOC reviews. Topic distribution analyses across disciplines showed that learners in technical fields were particularly engaged with assessment-related issues. Significant differences in topic frequencies between high- and low-star-rated courses indicated the critical role of course quality and instructor support in shaping learner satisfaction. The study also offers implications for improving learner satisfaction through interventions in course design and implementation that monitor learners’ evolving needs effectively. Copyright © 2025 Athabasca University. All rights reserved.
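The paper's exact implementation is not given on this page, but a BERT-BiLSTM-Attention classifier of the kind the abstract describes typically feeds BERT's token-level embeddings through a bidirectional LSTM, pools the sequence with an attention layer, and classifies the pooled vector. The following is a minimal sketch under those assumptions, using PyTorch and Hugging Face Transformers; the model name, hidden size, and pooling details are illustrative, not the authors' reported configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class BertBiLSTMAttention(nn.Module):
    """Sketch of a BERT-BiLSTM-Attention sentence classifier (10 topic labels)."""

    def __init__(self, model_name="bert-base-uncased", lstm_hidden=128, num_labels=10):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Additive attention scores over the BiLSTM outputs
        self.attn = nn.Linear(2 * lstm_hidden, 1)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings from BERT: (batch, seq_len, hidden)
        hidden = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        lstm_out, _ = self.lstm(hidden)                          # (B, T, 2H)
        scores = self.attn(lstm_out).squeeze(-1)                 # (B, T)
        scores = scores.masked_fill(attention_mask == 0, -1e9)   # ignore padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)    # (B, T, 1)
        pooled = (weights * lstm_out).sum(dim=1)                 # attention pooling
        return self.classifier(pooled)                           # topic logits


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLSTMAttention()
batch = tokenizer(
    ["Great lectures, but the quizzes were far too hard."],
    return_tensors="pt", padding=True, truncation=True,
)
logits = model(batch["input_ids"], batch["attention_mask"])
predicted_topic = logits.argmax(dim=-1)  # index of one of the 10 topic categories
```

In practice such a model would be fine-tuned end to end on labelled review sentences with a cross-entropy loss, so the BERT layers adapt to the MOOC review domain rather than staying frozen.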
| Original language | English |
| --- | --- |
| Pages (from-to) | 57-79 |
| Journal | International Review of Research in Open and Distributed Learning |
| Volume | 26 |
| Issue number | 1 |
| Publication status | Published - 2025 |
Citation
Chen, X., Zou, D., Xie, H., Cheng, G., Li, Z., & Wang, F. L. (2025). Automatic classification of online learner reviews via fine-tuned BERTs. International Review of Research in Open and Distributed Learning, 26(1), 57-79. https://www.irrodl.org/index.php/irrodl/article/view/8068
Keywords
- Learner-generated content
- Automatic classification
- Fine-tuned BERTs
- Course evaluation