Abstract
We investigate a variant of variational autoencoders in which a superstructure of discrete latent variables sits on top of the latent features. In general, the superstructure is a tree of multiple super latent variables, and it is learned automatically from data. When the superstructure contains only one latent variable, our model reduces to one that assumes the latent features are generated from a Gaussian mixture model. We call our model the latent tree variational autoencoder (LTVAE). Whereas previous deep learning methods for clustering produce only one partition of the data, LTVAE produces multiple partitions, each given by one super latent variable. This is desirable because high-dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways.
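To make the one-super-latent-variable reduction concrete, here is a minimal sketch (assuming PyTorch; not the authors' implementation) of a VAE whose latent features follow a learnable Gaussian mixture prior, which is the special case the abstract describes. All layer sizes, class names, and hyperparameters are illustrative; the full LTVAE instead learns a tree of several such discrete variables, yielding one partition per variable.

```python
# Sketch of the single-super-latent-variable case of LTVAE: a VAE whose
# latent features z have a learnable Gaussian mixture prior. Illustrative
# only; names and sizes are assumptions, not the paper's code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMPriorVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, n_clusters=10, h_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))
        # Learnable mixture prior p(z) = sum_k pi_k N(z; mu_k, diag(sigma_k^2)).
        self.pi_logits = nn.Parameter(torch.zeros(n_clusters))
        self.mu_k = nn.Parameter(0.5 * torch.randn(n_clusters, z_dim))
        self.logvar_k = nn.Parameter(torch.zeros(n_clusters, z_dim))

    def _log_components(self, z):
        # Per-component Gaussian log densities, shape (batch, n_clusters).
        diff = z.unsqueeze(1) - self.mu_k                  # (B, K, z_dim)
        return -0.5 * (diff ** 2 / self.logvar_k.exp()
                       + self.logvar_k + math.log(2 * math.pi)).sum(-1)

    def log_p_z(self, z):
        # log p(z) via log-sum-exp over mixture components.
        log_pi = F.log_softmax(self.pi_logits, dim=0)
        return torch.logsumexp(log_pi + self._log_components(z), dim=1)

    def responsibilities(self, z):
        # Posterior p(k | z): the soft cluster assignment, i.e. one
        # partition of the data given by this single latent variable.
        log_pi = F.log_softmax(self.pi_logits, dim=0)
        return torch.softmax(log_pi + self._log_components(z), dim=1)

    def elbo(self, x):
        h = self.encoder(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        rec = -F.binary_cross_entropy_with_logits(
            self.decoder(z), x, reduction='none').sum(-1)     # log p(x|z)
        log_q = -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                        + math.log(2 * math.pi)).sum(-1)      # log q(z|x)
        return (rec + self.log_p_z(z) - log_q).mean()
```

Training would maximize `elbo(x)`; `responsibilities` then gives the soft partition induced by the one discrete latent variable. Per the abstract and keywords, the full LTVAE generalizes this by learning a tree structure of multiple such variables (using structure learning, stepwise EM, and message passing), so each super latent variable contributes its own partition.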
| Field | Value |
|---|---|
| Original language | English |
| Publication status | Published - May 2019 |
| Event | The Seventh International Conference on Learning Representations, New Orleans, United States. Duration: 06 May 2019 → 09 May 2019. https://iclr.cc/Conferences/2019 |
Conference

| Field | Value |
|---|---|
| Conference | The Seventh International Conference on Learning Representations |
| Abbreviated title | ICLR 2019 |
| Country/Territory | United States |
| City | New Orleans |
| Period | 06/05/19 → 09/05/19 |
| Internet address | https://iclr.cc/Conferences/2019 |
Citation

Li, X., Chen, Z., Poon, L. K. M., & Zhang, N. L. (2019, May). Learning latent superstructures in variational autoencoders for deep multidimensional clustering. Poster presented at the Seventh International Conference on Learning Representations (ICLR 2019), Ernest N. Morial Convention Center, New Orleans, US.

Keywords
- Latent tree model
- Variational autoencoder
- Deep learning
- Latent variable model
- Bayesian network
- Structure learning
- Stepwise EM
- Message passing
- Graphical model
- Multidimensional clustering
- Unsupervised learning