
Universal Language Model Fine-tuning For Text Classification

Jeremy Howard, Sebastian Ruder. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018 – 3529 citations

ACL Datasets Fine Tuning Training Techniques

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
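
The fine-tuning techniques the abstract alludes to are discriminative fine-tuning (Sec. 3.2), slanted triangular learning rates (Eq. 3), and gradual unfreezing (Sec. 3.3). Below is a minimal plain-PyTorch sketch of how the three fit together. The linear-layer encoder, dummy data, and training-loop constants are illustrative placeholders; only the 2.6 per-layer decay factor, cut_frac=0.1, and ratio=32 come from the paper. The authors' actual release builds on an AWD-LSTM language model in the fastai library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the pretrained encoder and the classifier head.
# ULMFiT proper uses a 3-layer AWD-LSTM language model; plain linear
# layers are used here only to keep the sketch self-contained.
layers = nn.ModuleList([nn.Linear(400, 400) for _ in range(3)])
head = nn.Linear(400, 2)

# Discriminative fine-tuning (Sec. 3.2): each lower layer trains with
# the learning rate of the layer above it divided by 2.6.
base_lr = 0.01
groups = [{"params": head.parameters(), "lr": base_lr}]
for depth, layer in enumerate(reversed(layers)):
    groups.append({"params": layer.parameters(),
                   "lr": base_lr / 2.6 ** (depth + 1)})
opt = torch.optim.SGD(groups, lr=base_lr)
base_lrs = [g["lr"] for g in opt.param_groups]  # remember per-group lrs

def stlr(step, total, cut_frac=0.1, ratio=32):
    """Slanted triangular learning rate (Eq. 3): a short linear
    warm-up followed by a long linear decay; returns a multiplier
    applied to every parameter group's base learning rate."""
    cut = max(1, int(total * cut_frac))
    if step < cut:
        p = step / cut
    else:
        p = 1 - (step - cut) / (cut * (1 / cut_frac - 1))
    return (1 + p * (ratio - 1)) / ratio

# Gradual unfreezing (Sec. 3.3): freeze the whole encoder, then
# unfreeze one more layer per epoch, starting from the top.
for layer in layers:
    layer.requires_grad_(False)

total_steps, step = 1000, 0
for epoch in range(4):
    if 0 < epoch <= len(layers):
        layers[len(layers) - epoch].requires_grad_(True)
    for _ in range(total_steps // 4):
        x = torch.randn(32, 400)        # dummy batch of features
        y = torch.randint(0, 2, (32,))  # dummy binary labels
        h = x
        for layer in layers:
            h = torch.relu(layer(h))
        loss = F.cross_entropy(head(h), y)
        mult = stlr(step, total_steps)
        for g, b in zip(opt.param_groups, base_lrs):
            g["lr"] = b * mult
        opt.zero_grad()
        loss.backward()
        opt.step()
        step += 1
```

The design intuition: higher layers encode more task-specific information and can tolerate larger updates, so they are unfrozen first and given larger learning rates, while the slanted triangular schedule lets all parameters converge quickly to a good region before a long refinement phase.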

Similar Work