Abstract
Exploiting label dependency is a key challenge in multi-label learning, and current methods address this problem mainly by training models on combinations of related labels and the original features. However, label dependency cannot be exploited dynamically and mutually in this way. We therefore propose a novel paradigm that leverages label dependency iteratively: each label's prediction is updated and propagated to the other labels via a random walk with restart process. Meanwhile, the label propagation is implemented as a supervised learning procedure that optimizes a loss function, so that more appropriate label dependencies can be learned. Extensive experiments demonstrate that our method achieves considerable improvements on several evaluation metrics. Copyright © 2013 IEEE.
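A minimal sketch of the random-walk-with-restart propagation the abstract alludes to, assuming a normalized label-dependency matrix `W` and a restart probability `alpha`; the function and variable names here are illustrative assumptions, not the authors' implementation, and the supervised learning of `W` via the loss function is omitted:

```python
import numpy as np

def rwr_propagate(W, s0, alpha=0.3, tol=1e-6, max_iter=100):
    """Propagate per-label scores over a label-dependency graph
    using a random walk with restart (hypothetical sketch).

    W        : (L, L) column-normalized label transition matrix
    s0       : (L,) initial per-label prediction scores for one instance
    alpha    : restart probability, pulling scores back toward s0
    """
    s = s0.copy()
    for _ in range(max_iter):
        # Standard RWR update: diffuse along label dependencies,
        # then restart toward the base predictions.
        s_next = (1 - alpha) * W @ s + alpha * s0
        if np.linalg.norm(s_next - s, 1) < tol:
            return s_next
        s = s_next
    return s

# Toy usage: three labels where labels 0 and 1 are strongly correlated.
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
W = W / W.sum(axis=0, keepdims=True)  # column-normalize
s0 = np.array([0.8, 0.2, 0.1])        # base classifier scores
print(rwr_propagate(W, s0))           # label 1's score is lifted by label 0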
Original language | English
---|---
Title of host publication | Proceedings of IEEE 13th International Conference on Data Mining, ICDM 2013
Place of publication | USA
Publisher | IEEE
Pages | 1061-1066
ISBN (Print) | 9780769551081
DOIs |
Publication status | Published - 2013