Abstract
Graph contrastive learning (GCL) has emerged as an effective technique for various graph learning tasks. It has been successfully applied in real-world recommender systems, where the contrastive loss and the downstream recommendation objectives are combined to form the overall objective function. However, this approach deviates from the original GCL paradigm, which pre-trains graph embeddings without involving downstream training objectives. In this paper, we propose a novel framework, CPTPP, which enhances GCL-based recommender systems via prompt tuning and thereby fully exploits the advantages of the original GCL protocol. Specifically, we first summarize user profiles in graph recommender systems to automatically generate personalized user prompts. These prompts are then combined with pre-trained user embeddings for prompt tuning on downstream tasks, helping to bridge the gap between pre-training and downstream objectives. Extensive experiments on three benchmark datasets confirm the effectiveness of CPTPP against state-of-the-art baselines. Additionally, a visualization experiment shows that user embeddings generated by CPTPP have a more uniform distribution, indicating improved capacity for modeling user preferences. The implementation code is available online for reproducibility. Copyright © 2023 Neural Information Processing Systems Foundation.
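The prompt-tuning idea in the abstract — generating a personalized prompt from a user's profile and fusing it with a frozen, GCL-pre-trained user embedding — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the linear prompt generator `W_prompt`, and additive fusion are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- not taken from the paper.
n_users, emb_dim, profile_dim = 4, 8, 16

# Frozen user embeddings from GCL pre-training (random stand-ins here).
pretrained_emb = rng.normal(size=(n_users, emb_dim))

# A summarized "user profile", e.g. aggregated interaction features.
user_profile = rng.normal(size=(n_users, profile_dim))

# Prompt generator: a trainable linear map from profile to prompt space
# (in the sketch it is just a fixed random matrix).
W_prompt = rng.normal(size=(profile_dim, emb_dim)) * 0.1

def personalized_prompt(profile: np.ndarray) -> np.ndarray:
    """Generate a personalized prompt vector from a user's profile."""
    return profile @ W_prompt

def tuned_user_embedding(profile: np.ndarray, emb: np.ndarray) -> np.ndarray:
    """Fuse the prompt with the frozen pre-trained embedding.

    Additive fusion is an assumption for this sketch; only the prompt
    generator would be updated during downstream prompt tuning.
    """
    return emb + personalized_prompt(profile)

tuned = tuned_user_embedding(user_profile, pretrained_emb)
print(tuned.shape)  # (4, 8)
```

During downstream training, only `W_prompt` (and the recommendation head) would receive gradients, leaving the pre-trained embeddings untouched — which is the sense in which prompt tuning preserves the original pre-train-then-adapt GCL protocol.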
Original language | English
---|---
Title of host publication | Advances in Neural Information Processing Systems
Place of publication | New Orleans, USA
Publisher | Neural Information Processing Systems Foundation
Volume | 36
ISBN (Print) | 9781713899921
Publication status | Published - 2023