Abstract
Deep neural networks currently achieve state-of-the-art performance in many multivariate time series classification (MTSC) tasks, which are crucial for various real-world applications. However, the black-box nature of deep learning models prevents humans from gaining insight into the internal workings and decisions of classifiers. Existing explainability research generally requires building separate explanation models that work alongside deep learning models or post-process their results, which calls for additional development effort. We propose a novel explanation module that can be plugged into existing deep neural networks to explore variable importance for explaining MTSC. We evaluate our module with popular deep neural networks on both real-world and synthetic datasets to demonstrate its effectiveness in generating explanations for MTSC. Our experiments also show that the module improves the classification accuracy of existing models, owing to its comprehensive incorporation of temporal features. Copyright © 2022 Springer Nature Switzerland AG.
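The abstract does not describe the module's internal design. One common way a pluggable variable-importance module of this kind can be realised is as a per-variable gating (attention) layer placed in front of an existing classifier, whose learned weights can be read off as importance scores after training. The sketch below illustrates that general idea only; it is not the authors' implementation, and the class and parameter names (`VariableImportanceGate`, `ExplainableClassifier`, `n_variables`) are hypothetical. It assumes PyTorch and channels-first input of shape (batch, variables, time).

```python
# Minimal sketch (assumed design, not the paper's actual module):
# a per-variable gate wrapped around an existing MTSC backbone.
import torch
import torch.nn as nn


class VariableImportanceGate(nn.Module):
    """Learns one weight per input variable (channel)."""

    def __init__(self, n_variables: int):
        super().__init__()
        # One logit per variable; softmax turns them into importance scores.
        self.logits = nn.Parameter(torch.zeros(n_variables))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_variables, seq_len)
        weights = torch.softmax(self.logits, dim=0)   # (n_variables,)
        return x * weights.view(1, -1, 1)             # reweighted input

    def importance(self) -> torch.Tensor:
        # Per-variable importance scores, inspectable after training.
        return torch.softmax(self.logits, dim=0).detach()


class ExplainableClassifier(nn.Module):
    """Wraps an existing backbone with the gating module."""

    def __init__(self, backbone: nn.Module, n_variables: int):
        super().__init__()
        self.gate = VariableImportanceGate(n_variables)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.gate(x))


if __name__ == "__main__":
    # Toy backbone: 1D CNN over 6 variables, 128 time steps, 3 classes.
    backbone = nn.Sequential(
        nn.Conv1d(6, 32, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(32, 3),
    )
    model = ExplainableClassifier(backbone, n_variables=6)
    logits = model(torch.randn(8, 6, 128))
    print(logits.shape)             # torch.Size([8, 3])
    print(model.gate.importance())  # per-variable importance scores
```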
Original language | English |
---|---|
Title of host publication | AI 2021: Advances in Artificial Intelligence: 34th Australasian Joint Conference, AI 2021, Sydney, NSW, Australia, February 2–4, 2022, Proceedings |
Editors | Guodong Long, Xinghuo Yu, Sen Wang |
Place of Publication | Cham |
Publisher | Springer |
Pages | 3–14 |
ISBN (Electronic) | 9783030975463 |
ISBN (Print) | 9783030975456 |
DOIs | |
Publication status | Published - 2022 |