
Machine Comprehension by Text-to-Text Neural Question Generation

Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, Adam Trischler. Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017 – 166 citations

[Paper]
Uncategorized

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. The model is first trained with teacher forcing (standard maximum likelihood estimation) and then fine-tuned with policy gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question-answering systems. Our model is trained and evaluated on the question-answering dataset SQuAD.
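The two-stage recipe described in the abstract, maximum-likelihood pretraining with teacher forcing followed by REINFORCE-style policy-gradient fine-tuning against a sequence-level reward, can be sketched as below. This is a minimal illustration rather than the authors' implementation: it assumes PyTorch, stands in a toy unconditional LSTM generator for the actual answer-conditioned seq2seq model, and replaces the QA-system reward with a placeholder `qa_reward` function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, MAX_LEN = 1000, 64, 128, 20

class QuestionGenerator(nn.Module):
    # Toy LSTM language model standing in for the seq2seq question generator;
    # the document/answer encoder is omitted for brevity.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):                  # tokens: (batch, steps)
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)                      # logits: (batch, steps, VOCAB)

def mle_loss(model, questions):
    # Stage 1: teacher forcing, i.e. maximize likelihood of reference questions.
    logits = model(questions[:, :-1])
    return F.cross_entropy(logits.reshape(-1, VOCAB), questions[:, 1:].reshape(-1))

def qa_reward(question_ids):
    # Placeholder for the QA-system reward (e.g. answer F1 obtained when a
    # trained QA model answers the generated question); one scalar per sample.
    return torch.rand(question_ids.size(0))

def policy_gradient_loss(model, batch_size=8, baseline=0.5):
    # Stage 2: sample a question token by token and apply REINFORCE,
    # weighting the sample's log-likelihood by (reward - baseline).
    tokens = torch.zeros(batch_size, 1, dtype=torch.long)     # <bos> id = 0
    log_probs = []
    for _ in range(MAX_LEN):
        step_logits = model(tokens)[:, -1]
        dist = torch.distributions.Categorical(logits=step_logits)
        next_tok = dist.sample()
        log_probs.append(dist.log_prob(next_tok))
        tokens = torch.cat([tokens, next_tok.unsqueeze(1)], dim=1)
    advantage = (qa_reward(tokens) - baseline).unsqueeze(1)    # (batch, 1)
    return -(advantage * torch.stack(log_probs, dim=1)).mean()

model = QuestionGenerator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
refs = torch.randint(1, VOCAB, (8, 12))                  # fake reference questions

for loss_fn in (lambda: mle_loss(model, refs),           # stage 1: MLE
                lambda: policy_gradient_loss(model)):    # stage 2: policy gradient
    opt.zero_grad()
    loss_fn().backward()
    opt.step()
```

In the paper the reward is computed from a real question-answering model evaluated on the generated question (alongside other quality rewards), and the baseline and decoding details are more elaborate; the sketch only shows how the two training signals plug into the same generator.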

Similar Work