Automated quality evaluation of large-scale benchmark datasets for vision-language tasks

Ruibin ZHAO, Zhiwei XIE, Yipeng ZHUANG, Leung Ho Philip YU

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Large-scale benchmark datasets are crucial in advancing research within the computer science community. They enable the development of more sophisticated AI models and serve as "golden" benchmarks for evaluating their performance. Thus, ensuring the quality of these datasets is of utmost importance for academic research and the progress of AI systems. For the emerging vision-language tasks, several datasets have been created and are frequently used, such as Flickr30k, COCO, and NoCaps, which typically contain a large number of images paired with their ground-truth textual descriptions. In this paper, an automatic method is proposed to assess the quality of large-scale benchmark datasets designed for vision-language tasks. In particular, a new cross-modal matching model is developed, which is capable of automatically scoring the textual descriptions of visual images. This model is then employed to evaluate the quality of vision-language datasets by automatically assigning a score to each "ground-truth" description of every image. With good agreement between manual and automated scoring results on the datasets, our findings reveal significant disparities in the quality of the ground-truth descriptions included in the benchmark datasets. More surprisingly, a small portion of the descriptions are unsuitable for serving as reliable ground-truth references. These discoveries emphasize the need for careful utilization of these publicly accessible benchmark datasets. Copyright © 2024 The Author(s).
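To illustrate the idea of automatically scoring ground-truth captions with a cross-modal model, the following is a minimal Python sketch. It uses an off-the-shelf CLIP image-text similarity model from Hugging Face Transformers as a hypothetical stand-in for the paper's own cross-modal matching model; the actual architecture, training data, and scoring scale used in the paper are not specified here, and the model name and threshold below are illustrative assumptions only.

    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    # Hypothetical stand-in scorer: a pretrained CLIP model, not the paper's model.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    model.eval()

    def score_captions(image_path, captions):
        """Return one image-text similarity score per candidate ground-truth caption."""
        image = Image.open(image_path).convert("RGB")
        inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # logits_per_image has shape (1, num_captions): higher means better match.
        return outputs.logits_per_image.squeeze(0).tolist()

    # Example: flag suspiciously low-scoring "ground-truth" captions for manual review.
    # The threshold of 20.0 is an arbitrary illustrative value, not taken from the paper.
    scores = score_captions("example.jpg", ["a dog running on grass", "a plate of food"])
    flagged = [c for c, s in zip(["a dog running on grass", "a plate of food"], scores) if s < 20.0]

In this sketch, each caption paired with an image receives a similarity score, and unusually low scores mark descriptions that may be unreliable as ground truth, mirroring the dataset-quality screening described in the abstract.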

Original language: English
Article number: 2450009
Journal: International Journal of Neural Systems
Volume: 34
Issue number: 3
Early online date: Feb 2024
DOI: https://doi.org/10.1142/S0129065724500096
Publication status: Published - Mar 2024

Citation

Zhao, R., Xie, Z., Zhuang, Y., & Yu, P. L. H. (2024). Automated quality evaluation of large-scale benchmark datasets for vision-language tasks. International Journal of Neural Systems, 34(3), Article 2450009. https://doi.org/10.1142/S0129065724500096

Keywords

  • Benchmark datasets
  • Quality evaluation
  • Vision-language tasks
  • Automated scoring
  • Cross-modal deep learning
  • PG student publication
