
Tailor: Generating And Perturbing Text With Semantic Controls

Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022 – 43 citations


Controlled text perturbation is useful for evaluating and improving model generalizability. However, current techniques rely on training a model for every target perturbation, which is expensive and hard to generalize. We present Tailor, a semantically controlled text generation system. Tailor builds on a pretrained seq2seq model and produces textual outputs conditioned on control codes derived from semantic representations. We craft a set of operations to modify the control codes, which in turn steer generation towards targeted attributes. These operations can be further composed into higher-level ones, allowing for flexible perturbation strategies. We demonstrate the effectiveness of these perturbations in multiple applications. First, we use Tailor to automatically create high-quality contrast sets for four distinct natural language processing (NLP) tasks. These contrast sets contain fewer spurious artifacts and are complementary to manually annotated ones in their lexical diversity. Second, we show that Tailor perturbations can improve model generalization through data augmentation. Perturbing just 2% of training data leads to a 5.8-point gain on an NLI challenge set measuring reliance on syntactic heuristics.
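To make the control-code mechanism concrete, here is a minimal sketch of how a perturbation could be expressed: control codes are serialized into the input of a pretrained seq2seq model, and editing the codes (e.g., switching voice or tense) steers the generated output. The bracketed code syntax, the `t5-base` checkpoint, and the `perturb` helper below are illustrative assumptions, not the exact format or API of the paper or the released Tailor library; a real run would also require Tailor's fine-tuned generator rather than an off-the-shelf model.

```python
# Sketch of Tailor-style control-coded generation (assumptions: the
# control-code syntax and model name are hypothetical stand-ins).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-base"  # stand-in for Tailor's fine-tuned seq2seq generator
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def perturb(sentence: str, controls: dict) -> str:
    """Prepend control codes (e.g., voice, tense) to the input and let the
    seq2seq model realize the perturbed sentence."""
    codes = " ".join(f"[{key}:{value}]" for key, value in controls.items())
    inputs = tokenizer(f"{codes} {sentence}", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# A perturbation strategy is then just an edit to the control codes,
# e.g., an active-to-passive rewrite composed with a tense change:
print(perturb("The chef cooked the meal.", {"VOICE": "passive", "TENSE": "past"}))
```

Composing higher-level operations, as the abstract describes, amounts to applying several such control-code edits before a single generation pass.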

Similar Work