Learning latent superstructures in variational autoencoders for deep multidimensional clustering

Xiaopeng LI, Zhourong CHEN, Kin Man POON, Nevin L. ZHANG

Research output: Other contribution

Abstract

We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features. In general, the superstructure is a tree of multiple super latent variables, and it is learned automatically from data. When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model. We call our model the latent tree variational autoencoder (LTVAE). Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions, each given by one super latent variable. This is desirable because high-dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways. Copyright © 2019 ICLR.
Original language: English
Publication status: Published - May 2019
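
To make the special case mentioned in the abstract concrete: when the superstructure contains a single discrete latent variable, the prior over the latent features becomes a Gaussian mixture. The sketch below, assuming PyTorch, illustrates such a mixture-prior VAE; the names (GMVAE, n_components, the network sizes) are illustrative assumptions and are not taken from the paper's code, which additionally learns a tree of several discrete variables.

```python
# A minimal sketch (PyTorch assumed; names illustrative) of the special case
# where the superstructure has one discrete latent variable: a VAE whose
# prior over the latent features z is a Gaussian mixture.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG_2PI = math.log(2 * math.pi)

class GMVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, n_components=10, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # Mixture prior p(z) = sum_k pi_k N(z; mu_k, diag(exp(logvar_k))):
        # the discrete super latent variable has n_components states.
        self.pi_logits = nn.Parameter(torch.zeros(n_components))
        self.mu_k = nn.Parameter(torch.randn(n_components, z_dim))
        self.logvar_k = nn.Parameter(torch.zeros(n_components, z_dim))

    def log_prior(self, z):
        # log p(z) = logsumexp_k [log pi_k + log N(z; mu_k, sigma_k^2)]
        diff = z.unsqueeze(1) - self.mu_k                      # (B, K, z_dim)
        log_nk = -0.5 * (diff ** 2 / self.logvar_k.exp()
                         + self.logvar_k + LOG_2PI).sum(-1)    # (B, K)
        log_pi = F.log_softmax(self.pi_logits, dim=0)
        return torch.logsumexp(log_pi + log_nk, dim=1)         # (B,)

    def forward(self, x):
        # x is assumed to lie in [0, 1] (Bernoulli decoder).
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        logits = self.dec(z)
        # Single-sample ELBO: E_q[log p(x|z)] + log p(z) - log q(z|x).
        recon = -F.binary_cross_entropy_with_logits(
            logits, x, reduction='none').sum(-1)
        log_q = -0.5 * ((z - mu) ** 2 / logvar.exp()
                        + logvar + LOG_2PI).sum(-1)
        return -(recon + self.log_prior(z) - log_q).mean()     # loss
```

After training, each data point can be clustered by the posterior responsibility of the mixture components, i.e. argmax over k of log pi_k + log N(z; mu_k, sigma_k^2). LTVAE generalizes this setup by attaching several such discrete variables in a learned tree, so that each variable yields its own partition of the data.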

Citation

Li, X., Chen, Z., Poon, L. K. M., & Zhang, N. L. (2019, May). Learning latent superstructures in variational autoencoders for deep multidimensional clustering. Poster presented at the Seventh International Conference on Learning Representations (ICLR 2019), Ernest N. Morial Convention Center, New Orleans, LA, United States.

Keywords

  • Latent tree model
  • Variational autoencoder
  • Deep learning
  • Latent variable model
  • Bayesian network
  • Structure learning
  • Stepwise EM
  • Message passing
  • Graphical model
  • Multidimensional clustering
  • Unsupervised learning