
A Survey Of Knowledge Enhanced Pre-trained Models

Jian Yang, Xinyu Hu, Gang Xiao, Yulong Shen. IEEE Transactions on Knowledge and Data Engineering, 2023 – 60 citations

[Paper]
Tags: Datasets, Fine Tuning, Interpretability and Explainability, Security, Survey Paper, Training Techniques

Pre-trained language models learn informative word representations from large-scale text corpora through self-supervised learning and, after fine-tuning, achieve promising performance across many natural language processing (NLP) tasks. These models, however, suffer from poor robustness and a lack of interpretability. We refer to pre-trained language models with knowledge injection as knowledge-enhanced pre-trained language models (KEPLMs). These models demonstrate deeper understanding and logical reasoning and introduce interpretability. In this survey, we provide a comprehensive overview of KEPLMs in NLP. We first discuss advancements in pre-trained language models and knowledge representation learning. We then systematically categorize existing KEPLMs from three different perspectives. Finally, we outline some potential directions for future research on KEPLMs.
