
Improving A Neural Semantic Parser By Counterfactual Learning From Human Bandit Feedback

Carolin Lawrence, Stefan Riezler. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018 – 42 citations


Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
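The reweighting issue mentioned in the abstract can be illustrated with a minimal sketch. The idea, hedged here as an assumption about the general technique rather than the authors' exact objective: a plain reward-weighted estimator over logged outputs can degenerate (the model can raise its score simply by increasing probability mass everywhere), whereas a self-normalized (reweighted) variant divides by the total probability mass and removes that incentive while still admitting minibatch gradient estimates. All variable names and the synthetic data below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical logged bandit data: for each logged input, the human
# feedback reward on the historic system's output, and the probability
# the current target model assigns to that same logged output.
rng = np.random.default_rng(0)
n = 8
rewards = rng.uniform(0.0, 1.0, size=n)       # logged human feedback in [0, 1]
target_probs = rng.uniform(0.1, 1.0, size=n)  # pi_theta(y | x) for logged outputs

# Unnormalized reward-weighted estimator: can be inflated by pushing
# up probability mass on all logged outputs, good or bad.
dpm = np.mean(rewards * target_probs)

# Self-normalized (reweighted) estimator: a probability-weighted average
# of rewards, so uniformly inflating probabilities no longer helps.
dpm_r = np.sum(rewards * target_probs) / np.sum(target_probs)

print(f"unnormalized: {dpm:.4f}  reweighted: {dpm_r:.4f}")
```

Because each probability is at most 1, the normalizer is at most `n`, so the reweighted value is never below the unnormalized one on the same data; the point of the reweighting, however, is the changed gradient incentive, not the value itself.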
