SSMix: Saliency-Based Span Mixup for Text Classification | Awesome LLM Papers

SSMix: Saliency-Based Span Mixup for Text Classification

Soyoung Yoon, Gyuwan Kim, Kyumin Park. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 – 51 citations


Data augmentation with mixup has been shown to be effective on various computer vision tasks. Despite its great success, applying mixup to NLP tasks has been difficult because text consists of discrete tokens of variable length. In this work, we propose SSMix, a novel mixup method where the operation is performed on the input text rather than on hidden vectors, as in previous approaches. SSMix synthesizes a sentence while preserving the locality of the two original texts through span-based mixing, and keeps more of the tokens relevant to the prediction by relying on saliency information. With extensive experiments, we empirically validate that our method outperforms hidden-level mixup methods on a wide range of text classification benchmarks, including textual entailment, sentiment classification, and question-type classification. Our code is available at https://github.com/clovaai/ssmix.
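The core idea described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it assumes per-token saliency scores are already available (e.g. gradient magnitudes of the loss with respect to the input embeddings), and all function names here are hypothetical. The least-salient span of one text is replaced with the most-salient span of another, and the label weight is set by the fraction of mixed-in tokens.

```python
# Minimal sketch of saliency-based span mixup, assuming per-token saliency
# scores `sal_a` / `sal_b` are precomputed (e.g. input-gradient magnitudes).
# Names are illustrative, not from the SSMix codebase.

def least_salient_span(saliency, span_len):
    # Start index of the span with the smallest total saliency.
    return min(range(len(saliency) - span_len + 1),
               key=lambda i: sum(saliency[i:i + span_len]))

def most_salient_span(saliency, span_len):
    # Start index of the span with the largest total saliency.
    return max(range(len(saliency) - span_len + 1),
               key=lambda i: sum(saliency[i:i + span_len]))

def ssmix(tokens_a, sal_a, tokens_b, sal_b, mix_ratio=0.2):
    # Span length proportional to the desired mixup ratio.
    span_len = max(1, int(len(tokens_a) * mix_ratio))
    i = least_salient_span(sal_a, span_len)   # where to paste into text A
    j = most_salient_span(sal_b, span_len)    # what to take from text B
    mixed = tokens_a[:i] + tokens_b[j:j + span_len] + tokens_a[i + span_len:]
    lam = span_len / len(mixed)               # label weight for B's class
    return mixed, lam

tokens_a = ["the", "movie", "was", "truly", "awful"]
sal_a    = [0.1, 0.2, 0.1, 0.5, 0.9]
tokens_b = ["a", "great", "fun", "film", "overall"]
sal_b    = [0.1, 0.9, 0.7, 0.6, 0.2]

mixed, lam = ssmix(tokens_a, sal_a, tokens_b, sal_b, mix_ratio=0.4)
# mixed → ["great", "fun", "was", "truly", "awful"], lam → 0.4
```

The mixed label would then be `(1 - lam) * label_a + lam * label_b`, mirroring how mixup interpolates labels in vision; here the operation happens on discrete tokens, which is the point of performing it at the input level.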

Similar Work