Not Enough Data? Deep Learning To The Rescue!

Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, Naama Zwerdling. Proceedings of the AAAI Conference on Artificial Intelligence 2020 – 285 citations

AAAI Compositional Generalization Content Enrichment Datasets Fine Tuning Image Text Integration Interdisciplinary Approaches Multimodal Semantic Representation Training Techniques Variational Autoencoders Visual Contextualization Visual Question Answering

Building on recent advances in natural language modeling and text generation, we propose a novel data augmentation method for text classification tasks. We use a powerful pre-trained neural network model to artificially synthesize new labeled data for supervised learning, focusing mainly on cases with scarce labeled data. Our method, referred to as language-model-based data augmentation (LAMBADA), fine-tunes a state-of-the-art language generator to a specific task through an initial training phase on the existing (usually small) labeled data. Given a class label, the fine-tuned model then generates new sentences for that class, which are filtered by a classifier trained on the original data. In a series of experiments, we show that LAMBADA improves classifiers' performance on a variety of datasets. Moreover, LAMBADA significantly improves upon state-of-the-art data augmentation techniques, specifically those applicable to text classification tasks with little labeled data.
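As a rough illustration of the three-step pipeline the abstract describes (fine-tune a generator, generate candidates per class label, filter with a classifier trained on the original data), the sketch below uses Hugging Face `transformers` with GPT-2 as the generator and a simple scikit-learn classifier as the filter. The prompt format, separator tokens, `threshold`, and helper functions are illustrative assumptions rather than the paper's exact setup, and the generator fine-tuning loop is elided.

```python
# Minimal sketch of a LAMBADA-style augmentation pipeline.
# Assumptions beyond the abstract: GPT-2 via Hugging Face transformers,
# a TF-IDF + logistic-regression filter, and an ad-hoc "label [SEP] text"
# prompt format. Fine-tuning of the generator is omitted for brevity.

from transformers import GPT2LMHeadModel, GPT2Tokenizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

SEP = "[SEP]"  # hypothetical separator between label and text


def train_filter_classifier(texts, labels):
    """Classifier trained on the original (small) labeled data; used as the filter."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    return clf


def generate_candidates(model, tokenizer, label, n=20, max_new_tokens=40):
    """Sample candidate sentences conditioned on a class label."""
    inputs = tokenizer(f"{label} {SEP}", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        num_return_sequences=n,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    decoded = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # Keep only the text after the "label [SEP]" prefix.
    return [d.split(SEP, 1)[-1].strip() for d in decoded]


def lambada_augment(train_texts, train_labels, per_class=100, threshold=0.7):
    # Step 1: fine-tune the generator on "label [SEP] text" sequences
    # (fine-tuning loop omitted; a pre-trained GPT-2 is loaded instead).
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Step 2: train the filtering classifier on the original labeled data.
    clf = train_filter_classifier(train_texts, train_labels)

    # Step 3: generate per-class candidates and keep only confident ones.
    augmented = []
    for label in sorted(set(train_labels)):
        candidates = generate_candidates(model, tokenizer, label, n=per_class)
        probs = clf.predict_proba(candidates)
        label_idx = list(clf.classes_).index(label)
        for text, p in zip(candidates, probs[:, label_idx]):
            if p >= threshold:  # keep candidates the filter confidently assigns to this class
                augmented.append((text, label))
    return augmented
```

The filtering step is what keeps low-quality or off-class generations out of the augmented training set; the confidence threshold trades off how much synthetic data is kept against how noisy it is.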

Similar Work