Paraphrase Generation With Deep Reinforcement Learning

Zichao Li, Xin Jiang, Lifeng Shang, Hang Li. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018) – 204 citations

Tags: Applications, EMNLP, Evaluation, Reinforcement Learning, Tools, Training Techniques

Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP), and plays a key role in a number of applications such as question answering, search, and dialogue. In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a *generator* and an *evaluator*, both of which are learned from data. The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning respectively, depending on the type of available training data. Empirical study shows that the learned evaluator can guide the generator to produce more accurate paraphrases. Experimental results demonstrate that the proposed models (the generators) outperform state-of-the-art methods in paraphrase generation in both automatic evaluation and human evaluation.
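The sketch below illustrates the fine-tuning loop the abstract describes: the generator samples a paraphrase, the evaluator scores the (source, candidate) pair, and that score is used as a policy-gradient (REINFORCE-style) reward. All names and interfaces here (`generator.sample`, the evaluator call signature) are hypothetical stand-ins, not the paper's actual code or architecture.

```python
# Minimal sketch of evaluator-guided RL fine-tuning, assuming a PyTorch
# seq2seq generator and a matching-model evaluator with the interfaces
# documented below. Illustration only; not the authors' implementation.
import torch


def rl_finetune_step(generator, evaluator, optimizer, source_ids):
    """One policy-gradient update on a single source sentence.

    generator: seq2seq model with sample(source_ids) -> (token_ids, log_probs)
    evaluator: matching model mapping (source, candidate) -> score in [0, 1]
    """
    # Sample a candidate paraphrase, keeping per-token log-probabilities
    # of the sampled tokens so the policy gradient can be computed.
    sampled_ids, log_probs = generator.sample(source_ids)

    # Reward from the evaluator: how paraphrase-like is the pair?
    # No gradient flows into the evaluator during generator fine-tuning.
    with torch.no_grad():
        reward = evaluator(source_ids, sampled_ids)  # scalar tensor

    # REINFORCE objective: weight the sequence log-likelihood by the
    # reward, so high-reward samples are made more probable.
    loss = -(reward * log_probs.sum())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()
```

In the supervised variant described in the abstract, the evaluator would be trained beforehand on labeled paraphrase pairs; in the inverse-RL variant, it is updated alongside the generator so that reference paraphrases receive higher scores than generated ones.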

Similar Work