Neural Data-to-text Generation: A Comparison Between Pipeline And End-to-end Architectures

Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, Emiel Krahmer. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019 – 102 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
EMNLP Uncategorized

Traditionally, most data-to-text applications have been designed using a modular pipeline architecture, in which non-linguistic input data is converted into natural language through several intermediate transformations. In contrast, recent neural models for data-to-text generation have been proposed as end-to-end approaches, where the non-linguistic input is rendered in natural language with far fewer explicit intermediate representations in between. This study introduces a systematic comparison between neural pipeline and end-to-end data-to-text approaches for the generation of text from RDF triples. Both architectures were implemented using state-of-the-art deep learning methods, namely encoder-decoder Gated Recurrent Units (GRU) and the Transformer. Automatic and human evaluations, together with a qualitative analysis, suggest that having explicit intermediate steps in the generation process results in better texts than those generated by end-to-end approaches. Moreover, the pipeline models generalize better to unseen inputs. Data and code are publicly available.
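To make the contrast concrete, below is a minimal sketch of the two architectures on a toy RDF input. The intermediate steps (ordering, structuring, lexicalization) follow the usual pipeline decomposition for data-to-text generation; the template-based functions, predicate names, and linearization format are illustrative assumptions, not the authors' code, where each stage is instead a neural GRU or Transformer model.

```python
# Sketch: pipeline vs. end-to-end data-to-text generation from RDF triples.
# All templates and predicate names below are hypothetical stand-ins.
from dataclasses import dataclass
from typing import List

@dataclass
class Triple:
    subject: str
    predicate: str
    object: str

# --- Pipeline architecture: explicit intermediate representations ---

def order_triples(triples: List[Triple]) -> List[Triple]:
    # Discourse ordering: decide in which order the triples are expressed.
    return sorted(triples, key=lambda t: t.predicate)

def structure(triples: List[Triple]) -> List[List[Triple]]:
    # Text structuring: group triples into sentence-sized plans (one per sentence here).
    return [[t] for t in triples]

def lexicalize(plan: List[Triple]) -> str:
    # Lexicalization / realization: map each predicate to a sentence template.
    templates = {
        "birthPlace": "{s} was born in {o}.",
        "occupation": "{s} worked as a {o}.",
    }
    return " ".join(
        templates.get(t.predicate, "{s} {p} {o}.").format(
            s=t.subject, p=t.predicate, o=t.object
        )
        for t in plan
    )

def generate_pipeline(triples: List[Triple]) -> str:
    plans = structure(order_triples(triples))
    return " ".join(lexicalize(plan) for plan in plans)

# --- End-to-end architecture: one model, no explicit intermediate steps ---

def generate_end_to_end(triples: List[Triple]) -> str:
    # In the paper this is a single GRU/Transformer encoder-decoder that maps
    # the linearized triples directly to text; a trivial linearization and a
    # placeholder output stand in here.
    linearized = " <TRIPLE> ".join(
        f"{t.subject} | {t.predicate} | {t.object}" for t in triples
    )
    return f"[seq2seq model would decode from: {linearized}]"

if __name__ == "__main__":
    data = [
        Triple("Ada Lovelace", "occupation", "mathematician"),
        Triple("Ada Lovelace", "birthPlace", "London"),
    ]
    print(generate_pipeline(data))
    print(generate_end_to_end(data))
```

The sketch only illustrates where the two designs differ: the pipeline exposes intermediate symbolic representations that can be inspected and swapped out, while the end-to-end model commits everything to a single learned mapping from linearized triples to text.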

Similar Work