
Graph Pre-training For AMR Parsing And Generation

Xuefeng Bai, Yulong Chen, Yue Zhang. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022 – 48 citations

ACL · Compositional Generalization · Content Enrichment · Fine Tuning · Interdisciplinary Approaches · Multimodal · Semantic Representation · RAG · Tools · Training Techniques · Variational Autoencoders

Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation. However, PLMs are typically pre-trained on textual data and are thus sub-optimal for modeling structural knowledge. To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs.
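
To give a concrete picture of what graph self-supervised pre-training over AMR graphs can look like, below is a minimal, hypothetical sketch of one graph auto-encoding step: concepts in a linearized (PENMAN) AMR graph are masked, and a seq2seq model is trained to reconstruct the original graph. The BART backbone, the PENMAN linearization, the masking ratio, and the mask_concepts helper are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of a graph denoising auto-encoding step on a linearized AMR
# graph. Assumptions (not from the paper): PENMAN linearization, a BART-base
# backbone from Hugging Face Transformers, and simple concept masking.
import random

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Toy AMR graph (PENMAN notation) for "The boy wants to go."
amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"


def mask_concepts(graph: str, ratio: float = 0.3) -> str:
    """Replace a fraction of AMR concept tokens (those following '/') with <mask>."""
    tokens = graph.split()
    concept_positions = [i for i in range(1, len(tokens)) if tokens[i - 1] == "/"]
    n_mask = max(1, int(len(concept_positions) * ratio))
    for i in random.sample(concept_positions, n_mask):
        core = tokens[i].rstrip(")")  # keep closing parentheses so the graph stays well-formed
        tokens[i] = tokenizer.mask_token + tokens[i][len(core):]
    return " ".join(tokens)


corrupted = mask_concepts(amr)

# Denoising objective: encode the corrupted graph, decode the original graph.
inputs = tokenizer(corrupted, return_tensors="pt")
labels = tokenizer(amr, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()  # one pre-training step; optimizer and batching omitted for brevity
```

In the paper's unified framework, graph auto-encoding objectives of this kind are combined with tasks that mix text and graph information, so that the same pre-trained model can be fine-tuned for both AMR parsing and AMR-to-text generation.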

Similar Work