Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems

Peng Liu, Lemei Zhang, Jon Atle Gulla. ACM Computing Surveys, 2023 – 2,138 citations

[Paper]
Tags: Compositional Generalization, Content Enrichment, Efficiency, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Multimodal Semantic Representation, Neural Machine Translation, Productivity Enhancement, Prompting, Question Answering, Survey Paper, Training Techniques

Pre-trained Language Models (PLMs) have achieved tremendous success in Natural Language Processing (NLP) by learning universal representations from large corpora in a self-supervised manner. Both the pre-trained models and the learned representations benefit a wide range of downstream NLP tasks. This training paradigm has recently been adapted to the recommendation domain and is considered a promising approach by both academia and industry. In this paper, we systematically investigate how to extract and transfer knowledge from pre-trained models learned under different PLM-related training paradigms to improve recommendation performance along several dimensions, such as generality, sparsity, efficiency, and effectiveness. Specifically, we propose a comprehensive taxonomy that divides existing PLM-based recommender systems with respect to their training strategies and objectives. We then analyze and summarize the connections between PLM-based training paradigms and the different input data types used in recommender systems. Finally, we elaborate on open issues and future research directions in this vibrant field.
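The survey covers both the pre-train/fine-tune and the prompting paradigms for recommendation. As a concrete illustration of the latter, the sketch below (not from the paper) ranks candidate items by the likelihood a frozen pre-trained language model assigns to a prompt built from a user's interaction history; the model choice (gpt2), the prompt template, and the item names are all illustrative assumptions.

```python
# Minimal sketch of prompt-based recommendation with a frozen PLM:
# score each candidate item by the log-likelihood of a textual prompt
# that verbalizes the user's history and the candidate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def score_candidate(history: list[str], candidate: str) -> float:
    """Average log-likelihood of a prompt ending in `candidate` (higher is better)."""
    # Hypothetical prompt template; real systems tune or learn this.
    prompt = f"A user liked: {', '.join(history)}. Next, they will like: {candidate}"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean LM loss,
        # i.e. the per-token negative log-likelihood of the prompt.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

history = ["The Matrix", "Blade Runner"]            # illustrative data
candidates = ["Inception", "Finding Nemo"]
ranked = sorted(candidates, key=lambda c: score_candidate(history, c), reverse=True)
print(ranked)
```

No recommendation-specific training happens here; the ranking comes entirely from knowledge already encoded in the pre-trained model, which is the zero-shot end of the paradigm spectrum the survey organizes.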

Similar Work