An extended variational mode decomposition algorithm developed speech emotion recognition performance

David HASON RUDD, Huan HUO, Guandong XU

Research output: Chapter in Book/Report/Conference proceeding › Chapter

1 Citation (Scopus)

Abstract

Emotion recognition (ER) from speech signals is a robust approach because, unlike facial expressions or text-based sentiment analysis, speech cannot easily be imitated. The valuable information underlying emotions is significant for human-computer interaction, enabling intelligent machines to interact with sensitivity in the real world. Previous ER studies based on speech signal processing have focused exclusively on associations between different signal mode decomposition methods and hidden informative features. However, improper selection of decomposition parameters causes the loss of informative signal components through mode duplication and mixing. In contrast, the present study proposes VGG-optiVMD, an empowered variational mode decomposition algorithm that distinguishes meaningful speech features and automatically selects the number of decomposed modes and the optimal balancing parameter for the data fidelity constraint by assessing their effects on the flattened output layer of VGG16. Various feature vectors were used to train the VGG16 network on different databases and to assess the reproducibility and reliability of VGG-optiVMD. One-, two-, and three-dimensional feature vectors were constructed by concatenating Mel-frequency cepstral coefficients, chromagrams, Mel spectrograms, Tonnetz diagrams, and spectral centroids. The results confirmed a synergistic relationship between fine-tuning of the signal sample rate and decomposition parameters and classification accuracy, achieving a state-of-the-art 96.09% accuracy in predicting seven emotions on the Berlin EMO-DB database. Copyright © 2023 The Author(s).

Original language: English
Title of host publication: Advances in knowledge discovery and data mining: 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2023, Osaka, Japan, May 25–28, 2023, proceedings, part III
Editors: Hisashi KASHIMA, Tsuyoshi IDE, Wen-Chih PENG
Place of Publication: Cham
Publisher: Springer
Pages: 219-231
ISBN (Electronic): 9783031333804
ISBN (Print): 9783031333798
DOIs: https://doi.org/10.1007/978-3-031-33380-4_17
Publication status: Published - 2023

Citation

Hason Rudd, D., Huo, H., & Xu, G. (2023). An extended variational mode decomposition algorithm developed speech emotion recognition performance. In H. Kashima, T. Ide, & W.-C. Peng (Eds.), Advances in knowledge discovery and data mining: 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2023, Osaka, Japan, May 25–28, 2023, proceedings, part III (pp. 219-231). Springer. https://doi.org/10.1007/978-3-031-33380-4_17

Keywords

  • Speech emotion recognition (SER)
  • Variational mode decomposition (VMD)
  • Sound signal processing
  • Convolutional neural network (CNN)
  • Acoustic features
