Prompt-learning For Fine-grained Entity Typing | Awesome LLM Papers

Prompt-learning For Fine-grained Entity Typing

Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, Hong-Gee Kim. Findings of the Association for Computational Linguistics: EMNLP 2022 – 79 citations

[Paper]
ACL Compositional Generalization EMNLP Efficiency Few Shot Fine Tuning Interdisciplinary Approaches Multimodal Semantic Representation Prompting Training Techniques

As an effective approach to tune pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on three fine-grained entity typing benchmarks (with up to 86 classes) under fully supervised, few-shot and zero-shot settings show that prompt-learning methods significantly outperform fine-tuning baselines, especially when the training data is insufficient.
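
The sketch below illustrates the kind of pipeline the abstract describes: wrap the entity mention in a cloze-style, entity-oriented template, run masked language modeling with a PLM, and score candidate entity types through a verbalizer that maps each type to label words. It is not the authors' released code; the model name, the template wording, and the tiny three-type verbalizer are illustrative assumptions.

```python
# Minimal sketch of cloze-style prompt-learning for entity typing (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

# Hypothetical verbalizer: entity type -> label words in the PLM vocabulary.
VERBALIZER = {
    "person":       ["person", "man", "woman"],
    "organization": ["organization", "company", "team"],
    "location":     ["location", "city", "country"],
}

def type_scores(sentence: str, mention: str) -> dict:
    """Score entity types for `mention` in `sentence` via masked language modeling."""
    # Entity-oriented template: the [MASK] token stands in for the type word.
    prompt = f"{sentence} In this sentence, {mention} is a {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Locate the [MASK] position and take the vocabulary distribution there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    probs = logits[0, mask_pos].softmax(dim=-1)

    # Aggregate label-word probabilities per type (mean over the label words).
    scores = {}
    for etype, words in VERBALIZER.items():
        ids = tokenizer.convert_tokens_to_ids(words)
        scores[etype] = probs[torch.tensor(ids)].mean().item()
    return scores

print(type_scores("Steve Jobs founded Apple in 1976.", "Steve Jobs"))
```

In the zero-shot setting this scoring can be used directly; in the supervised and few-shot settings the same template and verbalizer define the training objective, with the MLM head fine-tuned on the labeled mentions.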

Similar Work