
Pre-training Transformers As Energy-based Cloze Models

Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 – 67 citations

[Paper]
Tags: Compositional Generalization · EMNLP · Interdisciplinary Approaches · Model Architecture · Multimodal Semantic Representation · Training Techniques

We introduce Electric, an energy-based cloze model for representation learning over text. Like BERT, it is a conditional generative model of tokens given their contexts. However, Electric does not use masking or output a full distribution over tokens that could occur in a context. Instead, it assigns a scalar energy score to each input token indicating how likely it is given its context. We train Electric using an algorithm based on noise-contrastive estimation and elucidate how this learning objective is closely related to the recently proposed ELECTRA pre-training method. Electric performs well when transferred to downstream tasks and is particularly effective at producing likelihood scores for text: it re-ranks speech recognition n-best lists better than language models and much faster than masked language models. Furthermore, it offers a clearer and more principled view of what ELECTRA learns during pre-training.
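The abstract's two ingredients, a scorer that assigns one scalar energy to each token of an unmasked input and a noise-contrastive estimation (NCE) objective that separates observed tokens from samples drawn from a noise distribution q, can be sketched in a few lines. The PyTorch sketch below is illustrative only and is not the authors' implementation: the class and function names (`EnergyScorer`, `nce_loss`), the tiny encoder, and the uniform noise distribution in the demo are all assumptions, and it uses the textbook conditional-NCE objective the paper builds on rather than the paper's exact training recipe.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class EnergyScorer(nn.Module):
    """Toy Electric-style scorer: the full, unmasked sequence goes in and a
    scalar energy comes out for every position (lower = more plausible given
    its context)."""

    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.energy_head = nn.Linear(d_model, 1)

    def forward(self, token_ids):                       # (batch, seq)
        hidden = self.encoder(self.embed(token_ids))    # (batch, seq, d_model)
        return self.energy_head(hidden).squeeze(-1)     # (batch, seq) energies


def nce_loss(energy_data, energy_noise, log_q_data, log_q_noise, k):
    """Binary NCE: observed tokens are label 1, tokens sampled from the noise
    distribution q are label 0. The classifier logit for a token x is
    -E(x) - log(k * q(x | context))."""
    logit_data = -energy_data - (math.log(k) + log_q_data)
    logit_noise = -energy_noise - (math.log(k) + log_q_noise)
    loss_data = F.binary_cross_entropy_with_logits(
        logit_data, torch.ones_like(logit_data))
    loss_noise = F.binary_cross_entropy_with_logits(
        logit_noise, torch.zeros_like(logit_noise))
    return loss_data + k * loss_noise


if __name__ == "__main__":
    vocab_size, batch, seq, k = 1000, 2, 16, 4
    model = EnergyScorer(vocab_size)

    real_ids = torch.randint(vocab_size, (batch, seq))
    energy_data = model(real_ids)                       # (batch, seq)

    # Purely for this demo, q is uniform over the vocabulary; in the paper q
    # is a learned cloze model.
    log_q_data = torch.full((batch, seq), -math.log(vocab_size))

    # k noise draws. For simplicity every position is resampled from q here,
    # whereas Electric replaces only a small subset so the context stays
    # mostly real.
    noise_ids = torch.randint(vocab_size, (k, batch, seq))
    energy_noise = torch.stack([model(ids) for ids in noise_ids])   # (k, batch, seq)
    log_q_noise = log_q_data.expand(k, batch, seq)

    print(nce_loss(energy_data, energy_noise, log_q_data, log_q_noise, k).item())
```

Roughly speaking, NCE training encourages the scorer to be approximately self-normalized, so the negated per-token energies of a hypothesis can be summed in a single forward pass as an approximate pseudo-log-likelihood. That single-pass scoring is what makes the n-best re-ranking mentioned in the abstract much faster than using a masked language model, which needs one pass per masked position.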

Similar Work