Discourse-aware Neural Rewards For Coherent Text Generation | Awesome LLM Papers

Discourse-aware Neural Rewards For Coherent Text Generation

Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, Yejin Choi. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018 – 80 citations

[Paper]
NAACL

In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards to model cross-sentence ordering as a means to approximate desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning with commonly used scores as rewards.
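The core idea — sample generated text, score its discourse coherence with a learned reward, and update the generator with policy gradients — can be illustrated with a toy sketch. Everything below is hypothetical scaffolding, not the paper's architecture: the "sentences" are random vectors, the "neural reward" is a randomly initialized logistic scorer over adjacent sentence pairs standing in for a trained ordering model, and the generator is a simple softmax policy over whole orderings trained with REINFORCE.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Toy "sentence embeddings" (hypothetical): four sentences, 2-d each.
sentences = rng.normal(size=(4, 2))

# Stand-in for a learned ordering reward: a logistic scorer over the
# concatenation of two adjacent sentence vectors. In the paper this
# would be a neural model trained to recognize correct sentence order.
w = rng.normal(size=4)

def pair_reward(a, b):
    x = np.concatenate([a, b])
    return 1.0 / (1.0 + np.exp(-w @ x))

def sequence_reward(order):
    # Average adjacent-pair score approximates discourse coherence.
    return float(np.mean([pair_reward(sentences[i], sentences[j])
                          for i, j in zip(order, order[1:])]))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# "Generator": a categorical policy over all 4! = 24 orderings,
# optimized with REINFORCE against the learned reward.
perms = list(permutations(range(4)))
theta = np.zeros(len(perms))
baseline = 0.0  # moving-average baseline reduces gradient variance

for step in range(2000):
    p = softmax(theta)
    k = rng.choice(len(perms), p=p)       # sample an ordering
    R = sequence_reward(perms[k])          # score it with the reward
    grad = -p                              # grad of log p(k) under softmax
    grad[k] += 1.0
    theta += 0.5 * (R - baseline) * grad   # policy-gradient update
    baseline = 0.9 * baseline + 0.1 * R

best = perms[int(np.argmax(theta))]
```

In the paper itself the reward model is trained on gold sentence orderings and the generator is a sequence-to-sequence model producing tokens; this sketch only shows the reward-as-training-signal mechanism in miniature.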

Similar Work