Coupling Retrieval And Meta-learning For Context-dependent Semantic Parsing

Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, Jian Yin. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019 – 43 citations

[Paper]
ACL · Compositional Generalization · Datasets · Interdisciplinary Approaches · Retrieval Systems · Training Techniques

In this paper, we present an approach that incorporates retrieved datapoints as supporting evidence for context-dependent semantic parsing, such as generating source code conditioned on the class environment. Our approach naturally combines a retrieval model and a meta-learner: the former learns to find similar datapoints in the training data, and the latter treats the retrieved datapoints as a pseudo task for fast adaptation. Specifically, our retriever is a context-aware encoder-decoder model with a latent variable that takes the context environment into consideration, and our meta-learner learns to utilize retrieved datapoints in a model-agnostic meta-learning paradigm for fast adaptation. We conduct experiments on the CONCODE and CSQA datasets, where the context refers to the class environment in Java code and to conversational history, respectively. We use a sequence-to-action model as the base semantic parser, which achieves state-of-the-art accuracy on both datasets. Results show that both the context-aware retriever and the meta-learning strategy improve accuracy, and that our approach outperforms retrieve-and-edit baselines.
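
To make the retrieve-then-adapt idea concrete, the sketch below shows a test-time loop in the spirit of the paper: retrieve similar datapoints for a query, take a few gradient steps on them as a pseudo task (MAML-style fast adaptation), then parse the query with the adapted parameters. This is a minimal illustration, not the authors' implementation; `retriever.top_k`, `loss_fn`, and the example fields (`utterance`, `context`, `actions`) are hypothetical placeholders for the context-aware retriever and the sequence-to-action parser's interface.

```python
import copy
import torch


def parse_with_retrieved_support(parser, retriever, loss_fn, query, train_pool,
                                 k=5, inner_lr=1e-3, inner_steps=1):
    """Adapt a copy of the base parser on retrieved datapoints, then parse the query."""
    # 1) Retrieval: find training datapoints whose utterance and context
    #    resemble the query's (context-aware retriever, hypothetical API).
    support = retriever.top_k(query, train_pool, k=k)

    # 2) Fast adaptation: a few gradient steps on the retrieved pseudo task,
    #    applied to a copy so the base parser's parameters stay untouched.
    adapted = copy.deepcopy(parser)
    optimizer = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    adapted.train()
    for _ in range(inner_steps):
        optimizer.zero_grad()
        loss = sum(loss_fn(adapted(ex.utterance, ex.context), ex.actions)
                   for ex in support) / len(support)
        loss.backward()
        optimizer.step()

    # 3) Prediction: parse the query with the adapted parameters.
    adapted.eval()
    with torch.no_grad():
        return adapted(query.utterance, query.context)
```

During meta-training, the same inner loop would be wrapped in an outer update of the base parser's parameters; the sketch only shows the inference-time use of retrieved datapoints for adaptation.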

Similar Work