Large language models meet causal inference: Semantic-rich dual propensity score for sequential recommendation

Research output: Contribution to journal › Article › peer-review

Abstract

Sequential recommender systems (SRSs) are designed to suggest relevant items to users by analyzing their interaction sequences. However, SRSs often suffer from exposure bias in these sequences due to imbalanced item exposure and varied user activity levels, creating a self-reinforcing loop that favors popular items regardless of their true relevance. Most SRSs address exposure bias by focusing only on item dependencies, overlooking user-side exposure bias and the rich semantics behind interactions. These oversights result in a limited understanding of less active users' preferences and inaccurate preference capture for less exposed items, exacerbating exposure biases. Towards this end, we propose a novel method, LLM-enhanced Dual Propensity Score Estimation (LDPE), which synergistically integrates Large Language Models (LLMs) and causal inference. First, LDPE leverages LLMs' superior ability to capture rich semantics from textual data and then integrates collaborative information to generate debiased, semantic-rich LLM-based user/item embeddings. With these debiased user/item embeddings, LDPE estimates time-aware debiased propensity scores from both the item and user sides. These dual propensity scores can fully mitigate exposure bias by considering item popularity, user activity levels, and temporal dynamics. Lastly, LDPE employs a transformer as its backbone, incorporating the estimated dual propensity scores to accurately predict users' true preferences. Extensive experiments show that LDPE outperforms state-of-the-art baselines in terms of recommendation performance. Copyright © 2025 IEEE.
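The dual propensity scores described in the abstract are typically used as inverse-propensity-score (IPS) weights on a training loss. A minimal sketch of that idea, assuming per-interaction losses and separate item-side and user-side propensities (the function name, clipping, and self-normalization are illustrative choices, not the paper's exact LDPE formulation):

```python
import numpy as np

def dual_ips_loss(losses, item_propensity, user_propensity, clip=0.1):
    """Illustrative dual inverse-propensity-weighted loss.

    losses:          per-interaction prediction losses, shape (n,)
    item_propensity: estimated probability each item was exposed, shape (n,)
    user_propensity: estimated probability each user was active, shape (n,)
    clip:            lower bound on propensities to limit weight variance
    """
    # Combine item- and user-side propensities; clip to avoid huge weights
    p = np.clip(item_propensity * user_propensity, clip, 1.0)
    weights = 1.0 / p
    # Self-normalized IPS estimator (divide by weight sum) for lower variance
    return np.sum(weights * losses) / np.sum(weights)
```

Interactions with rarely exposed items or less active users receive larger weights, so their losses count more, counteracting the popularity feedback loop the abstract describes.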

Original language: English
Pages (from-to): 6494-6505
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 37
Issue number: 11
Early online date: Sept 2025
DOIs: 10.1109/TKDE.2025.3606149
Publication status: Published - Nov 2025

Citation

Yu, D., Li, Q., Huang, S., Cao, J., & Xu, G. (2025). Large language models meet causal inference: Semantic-rich dual propensity score for sequential recommendation. IEEE Transactions on Knowledge and Data Engineering, 37(11), 6494-6505. https://doi.org/10.1109/TKDE.2025.3606149

Keywords

  • Sequential recommendation
  • Exposure bias
  • Causal inference
  • Large language model
