
Understanding Task Design Trade-offs In Crowdsourced Paraphrase Collection

Youxuan Jiang, Jonathan K. Kummerfeld, Walter S. Lasecki. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2017 – 40 citations

[Paper]
Tags: ACL · Compositional Generalization · Content Enrichment · Datasets · Interdisciplinary Approaches · Training Techniques · Variational Autoencoders

Linguistically diverse datasets are critical for training and evaluating robust machine learning systems, but data collection is a costly process that often requires experts. Crowdsourcing the process of paraphrase generation is an effective means of expanding natural language datasets, but there has been limited analysis of the trade-offs that arise when designing tasks. In this paper, we present the first systematic study of the key factors in crowdsourcing paraphrase collection. We consider variations in instructions, incentives, data domains, and workflows, and we manually analyze the resulting paraphrases for correctness, grammaticality, and linguistic diversity. Our observations provide new insight into the trade-offs between accuracy and diversity in crowd responses that arise as a result of task design, providing guidance for future paraphrase generation procedures.
