End-to-end latent-variable task-oriented dialogue system with exact log-likelihood optimization

Haotian XU, Haiyun PENG, Haoran XIE, Erik CAMBRIA, Liuyang ZHOU, Weiguo ZHENG

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

We propose an end-to-end dialogue model based on a hierarchical encoder-decoder, which employs a discrete latent variable to learn underlying dialogue intentions. The system models both the structure of utterances governed by language statistics and the dependencies among utterances in a dialogue, without manual dialogue state design. We argue that the discrete latent variable captures the intentions that guide the generation of machine responses. Because intention selection at each dialogue turn can be formulated as a sequential decision-making process, the model can also be refined autonomously with reinforcement learning. Our experiments show that the exact MLE-optimized model is much more robust than neural variational inference in terms of dialogue success rate, with only a limited sacrifice in BLEU. Copyright © 2019 Springer Science+Business Media, LLC, part of Springer Nature.
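To illustrate what "exact log-likelihood optimization" over a discrete latent intention can mean in practice, the following minimal sketch marginalizes the intention variable out exactly (via logsumexp) rather than bounding the likelihood with neural variational inference. It is not the authors' code; all names (K, context_vec, prior_net, decoder_loglik) are illustrative assumptions.

import torch
import torch.nn.functional as F

K = 10  # assumed number of latent dialogue intentions

def exact_log_likelihood(context_vec, response, prior_net, decoder_loglik):
    """log p(response | context) = logsumexp_k [ log p(z=k | context)
                                                 + log p(response | context, z=k) ]"""
    # Log prior over the K discrete intentions, shape (K,)
    log_prior = F.log_softmax(prior_net(context_vec), dim=-1)
    # Decoder log-likelihood of the response under each intention, shape (K,)
    log_cond = torch.stack(
        [decoder_loglik(context_vec, response, z=k) for k in range(K)]
    )
    # Exact marginalization over the discrete latent variable
    return torch.logsumexp(log_prior + log_cond, dim=-1)

Training would maximize this quantity directly (exact MLE); because the latent variable is discrete and low-cardinality, the sum over intentions is tractable and no variational lower bound is needed.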
Original language: English
Pages (from-to): 1989-2002
Journal: World Wide Web
Volume: 23
Issue number: 3
Early online date: 07 Jun 2019
DOIs: https://doi.org/10.1007/s11280-019-00688-8
Publication status: Published - May 2020

Citation

Xu, H., Peng, H., Xie, H., Cambria, E., Zhou, L., & Zheng, W. (2020). End-to-end latent-variable task-oriented dialogue system with exact log-likelihood optimization. World Wide Web, 23(3), 1989-2002. doi: 10.1007/s11280-019-00688-8

Keywords

  • Dialogue model
  • Hierarchical encoder-decoder
  • Log-likelihood optimization
  • Dialogue intention
