A Survey On Knowledge-enhanced Pre-trained Language Models

Chaoqi Zhen, Yanlei Shang, Xiangyu Liu, Yifei Li, Yong Chen, Dell Zhang. IEEE Transactions on Knowledge and Data Engineering, 2023 – 75 citations

Tags: Applications, Compositional Generalization, Content Enrichment, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Interpretability, Model Architecture, Multimodal Semantic Representation, Neural Machine Translation, Productivity Enhancement, Question Answering, Survey Paper, Visual Question Answering

Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, Knowledge-Enhanced Pre-trained Language Models (KEPLMs) have the potential to overcome these limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in the field.
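To make the idea of "integrating external knowledge into PLMs" concrete, below is a minimal PyTorch sketch of one common KEPLM strategy surveyed in this line of work: embedding-level fusion, where pre-trained knowledge-graph entity embeddings are merged into the token representations produced by a PLM. The class name `EntityFusionLayer`, the dimensions, and the assumption that tokens are pre-aligned to entities via entity linking are all illustrative choices, not the paper's prescribed design.

```python
import torch
import torch.nn as nn

class EntityFusionLayer(nn.Module):
    """Hypothetical sketch: fuse KG entity embeddings into PLM token states."""

    def __init__(self, token_dim: int = 768, entity_dim: int = 100):
        super().__init__()
        self.token_proj = nn.Linear(token_dim, token_dim)
        self.entity_proj = nn.Linear(entity_dim, token_dim)
        self.out_proj = nn.Linear(token_dim, token_dim)

    def forward(self, token_states, entity_embs, entity_mask):
        # token_states: (batch, seq_len, token_dim) hidden states from a PLM such as BERT
        # entity_embs:  (batch, seq_len, entity_dim) KG entity embeddings aligned to tokens
        # entity_mask:  (batch, seq_len, 1), 1 where a token has a linked entity, else 0
        fused = self.token_proj(token_states) + entity_mask * self.entity_proj(entity_embs)
        return self.out_proj(torch.tanh(fused))

# Usage with random tensors standing in for real PLM outputs and KG embeddings
layer = EntityFusionLayer()
tokens = torch.randn(2, 16, 768)
entities = torch.randn(2, 16, 100)
mask = torch.randint(0, 2, (2, 16, 1)).float()
fused = layer(tokens, entities, mask)  # -> (2, 16, 768)
```

Other KEPLM families covered by the survey (e.g., knowledge-aware pre-training objectives or retrieval-based augmentation) inject knowledge at different stages and would not follow this particular fusion pattern.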

Similar Work