
Low Resource Text Classification with ULMFiT and Backtranslation

Sam Shleifer · arXiv 2019 – 45 citations

[Paper]

In computer vision, virtually every state-of-the-art deep learning system is trained with data augmentation. In text classification, however, data augmentation is less widely practiced because it must be performed before training and risks introducing label noise. We augment the IMDB movie reviews dataset with examples generated by two families of techniques: random token perturbations introduced by Wei and Zou [2019], and backtranslation, which translates each example to a second language and then back to English. In low-resource settings, backtranslation yields significant improvements on top of the state-of-the-art ULMFiT model. A ULMFiT model pretrained on WikiText-103 and then fine-tuned on only 50 IMDB examples plus 500 synthetic examples generated by backtranslation achieves 80.6% accuracy, an 8.1% improvement over the augmentation-free baseline, at a cost of only 9 minutes of additional training time. Random token perturbations incur equivalent computational cost but yield no improvement. The benefits of training with backtranslated examples decrease as the size of the available training data grows: on the full dataset, neither augmentation technique improves upon ULMFiT's state-of-the-art performance. We address this by using backtranslation as a form of test-time augmentation and by ensembling ULMFiT with other models, achieving small improvements.
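The entry does not include reference code, but the backtranslation step is straightforward to sketch. The snippet below is a minimal illustration, assuming Hugging Face MarianMT translation checkpoints (`Helsinki-NLP/opus-mt-en-fr` and `Helsinki-NLP/opus-mt-fr-en`); the paper itself does not specify this tooling or the pivot language. Each review is translated English to French and back, and the resulting paraphrase keeps the original label:

```python
from transformers import MarianMTModel, MarianTokenizer

# Illustrative model choices; the paper does not name a translation system
# or pivot language. Any sufficiently good MT pair would serve the same role.
EN_FR = "Helsinki-NLP/opus-mt-en-fr"
FR_EN = "Helsinki-NLP/opus-mt-fr-en"


def _translate(texts, model_name):
    """Translate a batch of strings with a pretrained MarianMT model."""
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]


def backtranslate(texts):
    """English -> French -> English; paraphrases inherit the original label."""
    return _translate(_translate(texts, EN_FR), FR_EN)


reviews = ["This movie was a complete waste of time."]
print(backtranslate(reviews))  # e.g. "This film was a total waste of time."
```

In the low-resource setting described above, synthetic examples produced this way would be appended to the labeled training set (e.g. 500 backtranslations of the 50 real examples) before fine-tuning ULMFiT; the same generation step can also be applied at inference time for test-time augmentation.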

Similar Work