Towards Few-shot Fact-checking Via Perplexity

Nayeon Lee, Yejin Bang, Andrea Madotto, Madian Khabsa, Pascale Fung. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021) – 63 citations

Few-shot learning has drawn researchers' attention as a way to overcome data scarcity. Recently, large pre-trained language models have shown strong few-shot performance on various downstream tasks, such as question answering and machine translation. However, little work has explored few-shot learning for fact-checking, even though fact-checking is an increasingly important problem as the amount of information online grows exponentially every day. In this paper, we propose a new way of utilizing the powerful transfer learning ability of a language model via its perplexity score. The most notable strength of our methodology lies in its capability for few-shot learning: with only two training samples, it already outperforms the Major Class baseline by more than 10% absolute on the F1-Macro metric across multiple datasets. Through experiments, we empirically verify the plausibility of this rather surprising use of the perplexity score for fact-checking and highlight the strength of our few-shot methodology by comparing it against strong fine-tuning-based baselines. Moreover, we construct and publicly release two new fact-checking datasets related to COVID-19.
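
To make the idea concrete, below is a minimal sketch of perplexity-based claim verification, assuming GPT-2 via the Hugging Face transformers library. The evidence-claim concatenation format, the threshold rule, and the `conditional_perplexity` / `verify` helpers are illustrative assumptions rather than the authors' released implementation; only the core idea follows the abstract: score a claim by how surprising it is to a pre-trained LM given the evidence, then pick a perplexity threshold from a handful of labeled examples.

```python
# Sketch of perplexity-based fact-checking, assuming GPT-2 via Hugging Face
# transformers. Helper names and the threshold rule are hypothetical.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def conditional_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens given the evidence as a prefix."""
    evidence_ids = tokenizer(evidence, return_tensors="pt").input_ids
    claim_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
    input_ids = torch.cat([evidence_ids, claim_ids], dim=1)
    # Mask evidence positions with -100 so the cross-entropy loss is
    # averaged over the claim tokens only.
    labels = input_ids.clone()
    labels[:, : evidence_ids.size(1)] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return math.exp(loss.item())


def verify(evidence: str, claim: str, threshold: float) -> str:
    """A claim the LM finds 'unsurprising' given the evidence
    (perplexity below the threshold) is labeled SUPPORTED."""
    ppl = conditional_perplexity(evidence, claim)
    return "SUPPORTED" if ppl < threshold else "UNSUPPORTED"


# The few-shot step: choose the threshold from a few labeled pairs, e.g.
# a value separating the perplexities of true and false training claims.
print(verify(
    evidence="The WHO declared COVID-19 a pandemic in March 2020.",
    claim="COVID-19 was declared a pandemic by the WHO.",
    threshold=60.0,  # hypothetical value; tuned on the labeled samples
))
```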
