
Natural Language Generation By Hierarchical Decoding With Linguistic Patterns

Shang-Yu Su, Kai-Ling Lo, Yi-Ting Yeh, Yun-Nung Chen. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 2018 – 42 citations

[Paper]
ACL Compositional Generalization Interdisciplinary Approaches Model Architecture NAACL Neural Machine Translation RAG Training Techniques Variational Autoencoders

Natural language generation (NLG) is a critical component in spoken dialogue systems. Classic NLG can be divided into two phases: (1) sentence planning, which decides the overall sentence structure, and (2) surface realization, which determines specific word forms and flattens the sentence structure into a string. Many simple NLG models are based on recurrent neural networks (RNNs) and the sequence-to-sequence (seq2seq) framework, which basically consists of an encoder-decoder structure; these models generate sentences from scratch by jointly optimizing sentence planning and surface realization with a simple cross-entropy training criterion. However, the simple encoder-decoder architecture usually struggles to generate complex and long sentences, because the decoder has to learn all grammar and diction knowledge. This paper introduces a hierarchical decoding NLG model based on linguistic patterns at different levels, and shows that the proposed method outperforms the traditional approach with a smaller model size. Furthermore, the hierarchical decoding design is flexible and easily extensible to various NLG systems.
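
To make the hierarchical-decoding idea concrete, below is a minimal PyTorch sketch. It assumes the hierarchy splits generation into four linguistic levels (e.g., nouns/pronouns first, then verbs, then adjectives/adverbs, then remaining words), with one decoder per level that re-reads the partial sentence produced so far. The class name `HierarchicalDecoder`, the GRU choice, and all shapes are illustrative assumptions for exposition, not the authors' reference implementation.

```python
# Hypothetical sketch: hierarchical decoding over POS-based linguistic levels.
# Each level's decoder conditions on the previous level's partial sentence
# and on the hidden state handed down the hierarchy.
import torch
import torch.nn as nn


class HierarchicalDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_levels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One decoder per linguistic level, e.g. 1) nouns/pronouns,
        # 2) + verbs, 3) + adjectives/adverbs, 4) + all remaining words.
        self.decoders = nn.ModuleList(
            [nn.GRU(embed_dim, hidden_dim, batch_first=True) for _ in range(num_levels)]
        )
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, semantic_state, partial_tokens_per_level):
        """semantic_state: (1, batch, hidden_dim) encoding of the input semantics.
        partial_tokens_per_level: list of (batch, seq_len) token-id tensors, one
        per level, each holding the sentence generated up to that level."""
        hidden = semantic_state
        logits = None
        for level, tokens in enumerate(partial_tokens_per_level):
            emb = self.embed(tokens)
            # Decode this level, passing the hidden state to the next one.
            output, hidden = self.decoders[level](emb, hidden)
            logits = self.out(output)  # next-token scores at this level
        return logits


# Toy usage: batch of 2, vocab of 100, 5-token partial sentences per level.
vocab_size = 100
model = HierarchicalDecoder(vocab_size)
semantics = torch.zeros(1, 2, 128)
partials = [torch.randint(0, vocab_size, (2, 5)) for _ in range(4)]
print(model(semantics, partials).shape)  # torch.Size([2, 5, 100])
```

Because each decoder only has to add one class of words on top of an already-formed skeleton, no single decoder must learn all grammar and diction knowledge at once, which is the intuition behind the smaller model size reported in the paper.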

Similar Work