Exploring Underexplored Limitations of Cross-Domain Text-to-SQL Generalization

Yujian Gan, Xinyun Chen, Matthew Purver. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) – 52 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Compositional Generalization · Datasets · EMNLP · Evaluation · Interdisciplinary Approaches · Neural Machine Translation · Security · Training Techniques · Variational Autoencoders

Recently, there has been significant progress in studying neural networks for translating text descriptions into SQL queries under the zero-shot cross-domain setting. Despite achieving good performance on some public benchmarks, we observe that existing text-to-SQL models do not generalize when facing domain knowledge that does not frequently appear in the training data, which can lead to worse prediction performance on unseen domains. In this work, we investigate the robustness of text-to-SQL models when the questions require rarely observed domain knowledge. In particular, we define five types of domain knowledge and introduce Spider-DK (DK stands for domain knowledge), a human-curated dataset based on the Spider benchmark for text-to-SQL translation. The natural language questions in Spider-DK are selected from Spider, and we modify some samples by adding domain knowledge that reflects real-world question paraphrases. We demonstrate that prediction accuracy drops dramatically on samples that require such domain knowledge, even when the domain knowledge appears in the training set and the model makes correct predictions for related training samples.
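The experiment the abstract describes, comparing accuracy on questions that do and do not require domain knowledge, can be sketched in a few lines. The snippet below is a minimal illustration only: it assumes Spider-style JSON with `question`, `query`, and `db_id` fields, the `dk_type` annotation and `predict_sql` callback are hypothetical placeholders rather than anything defined by the paper, and the exact-match check is a simplification of the official Spider component-matching evaluation.

```python
import json
from collections import defaultdict


def exact_match(pred_sql: str, gold_sql: str) -> bool:
    """Naive exact-match after whitespace/case normalization.

    The official Spider metric decomposes SQL into clauses; this string
    comparison is only a stand-in for illustration.
    """
    normalize = lambda s: " ".join(s.lower().split())
    return normalize(pred_sql) == normalize(gold_sql)


def evaluate(examples_path: str, predict_sql) -> dict:
    """Score a text-to-SQL model on Spider-style examples, grouped by a
    (hypothetical) `dk_type` field marking which kind of domain knowledge
    a question requires."""
    with open(examples_path) as f:
        examples = json.load(f)  # list of {"question", "query", "db_id", ...}

    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        group = ex.get("dk_type", "no_domain_knowledge")
        pred = predict_sql(ex["question"], ex["db_id"])
        total[group] += 1
        correct[group] += int(exact_match(pred, ex["query"]))

    return {group: correct[group] / total[group] for group in total}


# Usage: plug any text-to-SQL model in behind `predict_sql`.
# accuracies = evaluate("spider_dk.json", my_model.predict)
# print(accuracies)  # per-group accuracy, exposing the drop on DK samples
```

Grouping the scores by knowledge type mirrors the paper's analysis: the gap between the "no domain knowledge" group and the others is what quantifies the generalization failure.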

Similar Work