
Measuring And Improving Consistency In Pretrained Language Models

Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg. Transactions of the Association for Computational Linguistics, 2021 – 149 citations

[Paper]

Consistency of a model – that is, the invariance of its behavior under meaning-preserving alternations in its input – is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create ParaRel, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for 38 relations. Using ParaRel, we show that the consistency of all PLMs we experiment with is poor – though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they have a poor structure and are currently not suitable for representing knowledge robustly. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness.
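As a concrete illustration of the kind of evaluation ParaRel enables, the sketch below queries a pretrained masked LM with several cloze-style paraphrases of the same fact and measures how often its top-1 predictions agree across paraphrase pairs. This is not the authors' code; the model name and the paraphrases are illustrative assumptions, not items drawn from ParaRel itself.

```python
# Minimal sketch of ParaRel-style consistency measurement (illustrative only).
from itertools import combinations
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

# Cloze-style paraphrases of one relation instance (hypothetical examples).
paraphrases = [
    "The capital of France is [MASK].",
    "France's capital is [MASK].",
    "[MASK] is the capital of France.",
]

# Top-1 prediction for each paraphrase.
preds = [fill(p, top_k=1)[0]["token_str"] for p in paraphrases]

# Consistency: fraction of paraphrase pairs with identical top-1 predictions.
pairs = list(combinations(preds, 2))
consistency = sum(a == b for a, b in pairs) / len(pairs)
print(preds, f"pairwise consistency = {consistency:.2f}")
```

A consistent model would produce the same object ("Paris") for every paraphrase, giving a score of 1.0; the paper reports that in practice scores vary widely across relations.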
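The proposed improvement can likewise be sketched at a high level: add a training objective that pushes the model's prediction distributions on paraphrases of the same fact toward one another. The symmetric-KL form below is an illustrative assumption rather than a verbatim reproduction of the paper's loss, and the model name and sentences are placeholders.

```python
# Hedged sketch of a consistency-style training term (assumed symmetric KL).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

def mask_logits(sentence: str) -> torch.Tensor:
    """Logits over the vocabulary at the [MASK] position."""
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tok.mask_token_id).nonzero()[0, 1]
    return model(**inputs).logits[0, mask_pos]

def consistency_loss(sent_a: str, sent_b: str) -> torch.Tensor:
    """Symmetric KL divergence between the two [MASK] distributions."""
    log_p = F.log_softmax(mask_logits(sent_a), dim=-1)
    log_q = F.log_softmax(mask_logits(sent_b), dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="sum")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="sum")
    return kl_pq + kl_qp

loss = consistency_loss(
    "The capital of France is [MASK].",
    "France's capital is [MASK].",
)
loss.backward()  # gradients flow into the PLM; combine with the usual MLM loss
```

Minimizing such a term over paraphrase pairs encourages the model to answer meaning-equivalent queries identically, which is the property the paper's evaluation shows pretrained models lack.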

Similar Work