JAKET: Joint Pre-training Of Knowledge Graph And Language Understanding

Donghan Yu, Chenguang Zhu, Yiming Yang, Michael Zeng. Proceedings of the AAAI Conference on Artificial Intelligence, 2022 – 93 citations

Tags: AAAI, Tools, Training Techniques

Knowledge graphs (KGs) contain rich information about world knowledge, entities, and relations, making them valuable supplements to existing pre-trained language models. However, efficiently integrating information from KGs into language modeling remains a challenge, and understanding a knowledge graph in turn requires related textual context. We propose JAKET, a novel joint pre-training framework that models both the knowledge graph and language. The knowledge module and language module provide essential information to each other: the knowledge module produces embeddings for entities mentioned in text, while the language module generates context-aware initial embeddings for entities and relations in the graph. This design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains. Experimental results on several knowledge-aware NLP tasks show that the proposed framework achieves superior performance by effectively leveraging knowledge in language understanding.
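The mutual-assistance cycle the abstract describes (the language module initializes KG embeddings from description text, the knowledge module propagates them over the graph, and the resulting entity embeddings flow back into the text representation) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the module names, dimensions, mean-pooled description encoding, and single message-passing step are simplified stand-ins for the paper's actual pre-trained encoder and graph neural network.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of JAKET's mutual-assistance cycle; not the
# authors' exact architecture.

class LanguageModule(nn.Module):
    """Contextual text encoder (stand-in for a pre-trained LM)."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> contextual states (batch, seq_len, dim)
        return self.encoder(self.embed(token_ids))


class KnowledgeModule(nn.Module):
    """One message-passing step over KG triples (stand-in for a GNN)."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, ent_init, rel_init, triples):
        # triples: (num_triples, 3) tensor of (head, relation, tail) indices
        h, r, t = triples.unbind(dim=1)
        msg = self.proj(torch.cat([ent_init[t], rel_init[r]], dim=-1))
        ent_out = ent_init.clone()
        ent_out.index_add_(0, h, msg)  # aggregate neighbor messages into heads
        return ent_out                 # knowledge-enriched entity embeddings


lm, km = LanguageModule(), KnowledgeModule()

# Step 1: the language module encodes entity/relation description text into
# context-aware initial KG embeddings (mean pooling is an assumed shortcut).
ent_desc = torch.randint(0, 30522, (5, 8))   # 5 entities, 8-token descriptions
rel_desc = torch.randint(0, 30522, (3, 8))   # 3 relations
ent_init = lm(ent_desc).mean(dim=1)
rel_init = lm(rel_desc).mean(dim=1)

# Step 2: the knowledge module propagates information over the graph.
triples = torch.tensor([[0, 1, 2], [3, 0, 4]])  # toy (head, rel, tail) edges
ent_emb = km(ent_init, rel_init, triples)

# Step 3: the entity embeddings are fed back into the text representation at
# entity-mention positions, closing the cycle between the two modules.
tokens = torch.randint(0, 30522, (1, 12))
hidden = lm(tokens).clone()
mention_pos, mention_ent = 4, 0                 # token 4 mentions entity 0
hidden[:, mention_pos] += ent_emb[mention_ent]
```

Because the KG embeddings are initialized from description text rather than from a fixed entity vocabulary, a sketch like this also suggests why the framework can adapt to unseen knowledge graphs: new entities only need descriptions, not retraining of an embedding table.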
