Conversational Recommender Systems (CRSs) fundamentally differ from traditional recommender systems by interacting with users in a conversational session to accurately predict their current preferences and provide personalized recommendations. Although current CRSs have achieved favorable recommendation performance, their explainability remains in its infancy. Most CRSs provide only coarse-grained explanations and fail to explore how minimal alterations to item attributes affect recommendation decisions. In this paper, we are the first to incorporate counterfactual techniques into CRSs, proposing a Counterfactual Explainable Conversational Recommender (CECR) that enhances the recommendation model from a counterfactual perspective. Counterfactual explanations offer fine-grained reasons for users' real-time intentions, while the generated counterfactual samples augment the training dataset and thereby improve recommendation performance. Specifically, CECR adaptively learns users' preferences from the conversation context and responds effectively to users' real-time feedback over multiple rounds of conversation. Furthermore, CECR actively generates counterfactual samples to augment the training set, leading to steady improvements in recommendation performance. Copyright © 2023 IEEE.
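The counterfactual idea in the abstract — find a minimal change to an item's attributes that flips the recommendation decision, then reuse the perturbed item as a training sample and the flipped attributes as a fine-grained explanation — can be illustrated with a toy example. The linear scorer, binary attribute encoding, and greedy search below are illustrative assumptions, not CECR's actual model:

```python
# Toy sketch of counterfactual sample generation for data augmentation.
# Assumption: item attributes are binary and user preference is a linear
# scorer; CECR's actual model is learned from conversation context.
import numpy as np

def recommend(w, x, threshold=0.0):
    """Recommend the item iff its preference score crosses the threshold."""
    return float(np.dot(w, x)) > threshold

def counterfactual_sample(w, x, threshold=0.0):
    """Greedily toggle the most influential binary attributes until the
    recommendation decision flips. The perturbed vector is a counterfactual
    training sample; the toggled attributes explain the decision."""
    x_cf = x.astype(float).copy()
    original = recommend(w, x_cf, threshold)
    flipped = []
    # Try attributes in decreasing order of influence on the score.
    for i in np.argsort(-np.abs(w)):
        x_cf[i] = 1.0 - x_cf[i]      # toggle binary attribute i
        flipped.append(int(i))
        if recommend(w, x_cf, threshold) != original:
            return x_cf, flipped     # small decision-flipping change found
    return None, flipped             # no counterfactual exists for this item

# Example: user-preference weights over three hypothetical item attributes.
w = np.array([2.0, -0.5, 0.1])
x = np.array([1, 0, 1])              # this item is recommended
x_cf, flipped = counterfactual_sample(w, x)
```

Here the greedy search is a stand-in for a proper minimal-perturbation optimization; the point is only that the flipped item becomes a new labeled training example with the opposite decision.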
IEEE Transactions on Knowledge and Data Engineering
E-pub ahead of print - Oct 2023
Citation: Yu, D., Li, Q., Wang, X., Li, Q., & Xu, G. (2023). Counterfactual explainable conversational recommendation. IEEE Transactions on Knowledge and Data Engineering. Advance online publication. https://doi.org/10.1109/TKDE.2023.3322403
- Interactive recommendations
- Conversational recommender systems
- Counterfactual explainability