
Zero-shot Text Classification With Self-training

Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, Noam Slonim. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022 – 44 citations

Tags: Compositional Generalization, Datasets, EMNLP, Fine Tuning, Interdisciplinary Approaches, Multimodal Semantic Representation, Training Techniques

Recent advances in large pretrained language models have increased attention to zero-shot text classification. In particular, models finetuned on natural language inference datasets have been widely adopted as zero-shot classifiers due to their promising results and off-the-shelf availability. However, the fact that such models are unfamiliar with the target task can lead to instability and performance issues. We propose a plug-and-play method to bridge this gap using a simple self-training approach, requiring only the class names along with an unlabeled dataset, and without the need for domain expertise or trial and error. We show that fine-tuning the zero-shot classifier on its most confident predictions leads to significant performance gains across a wide range of text classification tasks, presumably since self-training adapts the zero-shot model to the task at hand.
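The recipe lends itself to a compact implementation. Below is a minimal sketch, assuming the Hugging Face `transformers` library and the `facebook/bart-large-mnli` checkpoint as the off-the-shelf NLI classifier; the class names, hypothesis template, confidence threshold, and single-pass training loop are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the self-training loop: predict with a zero-shot NLI classifier,
# keep its most confident predictions as pseudo-labels, fine-tune on them.
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

MODEL_NAME = "facebook/bart-large-mnli"  # off-the-shelf NLI-finetuned model
CLASS_NAMES = ["sports", "politics", "technology"]  # only class names are needed
THRESHOLD = 0.9  # assumed confidence cutoff for pseudo-labels

# Toy stand-in for the unlabeled in-domain corpus.
unlabeled_texts = [
    "The striker scored twice in the second half.",
    "The senate passed the budget bill late on Tuesday.",
    "The new chip doubles inference throughput.",
]

# Step 1: zero-shot predictions from the NLI-based classifier.
clf = pipeline("zero-shot-classification", model=MODEL_NAME)
pseudo_labeled = []
for text in unlabeled_texts:
    result = clf(text, candidate_labels=CLASS_NAMES)
    label, score = result["labels"][0], result["scores"][0]
    # Step 2: keep only the most confident predictions.
    if score >= THRESHOLD:
        pseudo_labeled.append((text, label))

# Step 3: fine-tune the same model on its confident pseudo-labels, cast as
# entailment pairs (premise = text, hypothesis names the predicted class).
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
entail_id = model.config.label2id["entailment"]

model.train()
optimizer = AdamW(model.parameters(), lr=2e-5)
for text, label in pseudo_labeled:
    batch = tokenizer(text, f"This example is about {label}.",
                      return_tensors="pt", truncation=True)
    loss = model(**batch, labels=torch.tensor([entail_id])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# The adapted model can now replace the original classifier, and the
# predict -> filter -> fine-tune loop can be repeated.
```

The paper's procedure involves more detail than this sketch, but the core idea it illustrates is the same: the filter-and-finetune loop requires only the class names and an unlabeled dataset, no labeled examples or domain expertise.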

Similar Work