Neural Headline Generation With Sentence-wise Optimization | Awesome LLM Papers

Neural Headline Generation With Sentence-wise Optimization

Ayana, Shiqi Shen, Yu Zhao, Zhiyuan Liu, Maosong Sun. arXiv 2016 – 41 citations

Tags: Compositional Generalization, Efficiency, Evaluation, Interdisciplinary Approaches, Neural Machine Translation, Training Techniques, Variational Autoencoders

Recently, neural models have been proposed for headline generation that learn to map documents to headlines with recurrent neural networks. However, because traditional neural models use maximum likelihood estimation for parameter optimization, the training objective is essentially constrained to the word level rather than the sentence level. Moreover, model prediction performance depends heavily on the training data distribution. To overcome these drawbacks, we employ a minimum risk training strategy, which directly optimizes model parameters at the sentence level with respect to evaluation metrics and leads to significant improvements in headline generation. Experimental results show that our model outperforms state-of-the-art systems on both English and Chinese headline generation tasks.
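The core idea of minimum risk training is to sample candidate outputs, renormalize their model probabilities over the sample (often sharpened by a smoothness hyperparameter), and minimize the expected risk, where risk is typically one minus an evaluation metric such as ROUGE. A minimal sketch of the expected-risk computation, with illustrative function and parameter names (not taken from the paper's implementation):

```python
import math

def expected_risk(log_probs, risks, alpha=5e-3):
    """Expected risk over a sampled set of candidate headlines.

    log_probs: model log-probabilities of each sampled candidate
    risks:     per-candidate risk, e.g. 1 - ROUGE(candidate, reference)
    alpha:     smoothness hyperparameter sharpening the renormalized
               distribution (an assumed default, commonly a small value)
    """
    # Renormalize probabilities over the sample: q_i ∝ exp(alpha * log p_i).
    scaled = [alpha * lp for lp in log_probs]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    z = sum(weights)
    q = [w / z for w in weights]
    # Expected risk under the renormalized distribution q.
    return sum(qi * ri for qi, ri in zip(q, risks))

# Two candidates with equal model probability: expected risk is their mean.
print(expected_risk([0.0, 0.0], [0.2, 0.8]))  # → 0.5
```

In training, gradients of this quantity with respect to the model parameters push probability mass toward low-risk (high-metric) candidates, which is how sentence-level metric optimization replaces word-level likelihood.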

Similar Work