Abstract
Topic models have been widely used for learning latent, explainable representations of documents, but most existing approaches discover topics in a flat structure. In this study, we propose an effective hierarchical neural topic model with strong interpretability. Unlike previous neural topic models, we explicitly model the dependency between layers of a network, and then combine latent variables of different layers to reconstruct documents. Utilizing this network structure, our model can extract a tree-shaped topic hierarchy with low redundancy and good explainability by exploiting dependency matrices. Furthermore, we introduce manifold regularization into the proposed method to improve the robustness of topic modeling. Experiments on real-world datasets validate that our model outperforms other topic models in several widely used metrics, at a much lower computational cost. Copyright © 2021 The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
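As a rough illustration of the manifold-regularization idea mentioned in the abstract, a common formulation penalizes document representations that differ for documents lying close together on the data manifold, via a graph Laplacian built from nearest neighbors. The sketch below is a hypothetical minimal version of such a penalty in NumPy; the function names, the kNN-graph construction, and the exact loss form are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def knn_adjacency(X, k=1):
    """Symmetric binary kNN adjacency from pairwise Euclidean distances.
    (Assumed graph construction; the paper may use a different similarity.)"""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(W, W.T)            # symmetrize

def manifold_penalty(Theta, W):
    """tr(Theta^T L Theta) = (1/2) * sum_ij W_ij * ||theta_i - theta_j||^2,
    where L = D - W is the unnormalized graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(Theta.T @ L @ Theta)

# Toy check: documents that are neighbors in feature space (rows of X)
# and have similar topic vectors (rows of Theta) incur a small penalty.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Theta = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
W = knn_adjacency(X, k=1)
print(round(manifold_penalty(Theta, W), 3))  # → 0.04
```

In practice such a term would be added, with a weighting coefficient, to the topic model's reconstruction objective, encouraging smooth topic assignments over the document neighborhood graph.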
Original language | English |
---|---|
Pages (from-to) | 2139-2160 |
Journal | World Wide Web |
Volume | 24 |
Issue number | 6 |
Early online date | 15 Oct 2021 |
DOIs | 10.1007/s11280-021-00963-7 |
Publication status | Published - Nov 2021 |
Citation
Chen, Z., Ding, C., Rao, Y., Xie, H., Tao, X., Cheng, G., & Wang, F. L. (2021). Hierarchical neural topic modeling with manifold regularization. World Wide Web, 24(6), 2139-2160. doi: 10.1007/s11280-021-00963-7

Keywords
- Neural topic modeling
- Hierarchical structure
- Tree network
- Manifold regularization