IDoFew: Intermediate training using dual-clustering in language models for few labels text classification

Abdullah ALSUHAIBANI, Hamad ZOGAN, Imran RAZZAK, Shoaib JAMEEL, Guandong XU

Research output: Chapter in Book/Report/Conference proceeding › Chapter

2 Citations (Scopus)

Abstract

Language models such as Bidirectional Encoder Representations from Transformers (BERT) have been very effective in various Natural Language Processing (NLP) and text mining tasks including text classification. However, some tasks still pose challenges for these models, including text classification with limited labels. This can result in a cold-start problem. Although some approaches have attempted to address this problem through single-stage clustering as an intermediate training step coupled with a pre-trained language model, which generates pseudo-labels to improve classification, these methods are often error-prone due to the limitations of the clustering algorithms. To overcome this, we have developed a novel two-stage intermediate clustering with subsequent fine-tuning that models the pseudo-labels reliably, resulting in reduced prediction errors. The key novelty in our model, IDoFew, is that the two-stage clustering coupled with two different clustering algorithms helps exploit the advantages of the complementary algorithms that reduce the errors in generating reliable pseudo-labels for fine-tuning. Our approach has shown significant improvements compared to strong comparative models. Copyright © 2024 held by the owner/author(s).
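The core idea in the abstract, generating pseudo-labels with two complementary clustering algorithms and keeping only the labels on which they agree, can be sketched as follows. This is an illustrative toy example using scikit-learn on hand-made 2-D "embeddings", not the authors' IDoFew implementation; the variable names and the agreement rule are assumptions made for illustration.

```python
# Hedged sketch of dual-clustering pseudo-labeling (NOT the paper's exact
# pipeline): two different clustering algorithms label the same unlabeled
# data, and only labels on which both agree are kept for fine-tuning.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# Stand-ins for language-model embeddings of unlabeled documents.
X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])

# Stage 1: a first clustering pass yields initial pseudo-labels.
stage1 = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Stage 2: a complementary algorithm clusters the same embeddings.
stage2 = AgglomerativeClustering(n_clusters=2).fit_predict(X)

def align(ref, other):
    # Map `other`'s arbitrary cluster ids onto `ref`'s ids by majority overlap,
    # since cluster numbering is not comparable across algorithms.
    mapping = {}
    for lab in set(other):
        members = [ref[i] for i in range(len(other)) if other[i] == lab]
        mapping[lab] = max(set(members), key=members.count)
    return [mapping[l] for l in other]

# Keep only examples where both algorithms assign the same (aligned) cluster;
# these are the "reliable" pseudo-labels that would feed fine-tuning.
agreed = align(stage1, stage2)
reliable = [i for i in range(len(X)) if stage1[i] == agreed[i]]
print("reliable pseudo-labeled examples:", reliable)
```

On this toy data both algorithms recover the same two groups, so every example survives the agreement filter; on real text embeddings the filter is what discards the noisy pseudo-labels a single algorithm would produce.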

Original language: English
Title of host publication: Proceedings of the 17th ACM International Conference on Web Search and Data Mining, WSDM 2024
Place of publication: New York, United States
Publisher: Association for Computing Machinery
Pages: 18-27
ISBN (Electronic): 9798400703713
DOI: https://doi.org/10.1145/3616855.3635849
Publication status: Published - 2024

Citation

Alsuhaibani, A., Zogan, H., Razzak, I., Jameel, S., & Xu, G. (2024). IDoFew: Intermediate training using dual-clustering in language models for few labels text classification. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, WSDM 2024 (pp. 18-27). Association for Computing Machinery. https://doi.org/10.1145/3616855.3635849

Keywords

  • Text classification
  • Cluster
  • Few labels
  • Limited labels
  • Pre-trained models
