
Augmenting Neural Networks With First-order Logic

Tao Li, Vivek Srikumar. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019 – 70 citations


Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset. How to use world knowledge to inform a model while retaining the ability to train it end-to-end remains an open question. In this paper, we present a novel framework for introducing declarative knowledge into neural network architectures to guide training and prediction. Our framework systematically compiles logical statements into computation graphs that augment a neural network without extra learnable parameters or manual redesign. We evaluate our modeling strategy on three tasks: machine comprehension, natural language inference, and text chunking. Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes.
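To make the compilation step concrete, here is a minimal sketch of the core idea: a first-order rule of the form Z → Y, where the premise Z and the conclusion Y both correspond to named neurons, is folded into the computation graph as a fixed additive term on Y's pre-activation, so no new learnable parameters are introduced. This is an illustrative PyTorch reading of the abstract, not the paper's reference implementation; the class name `LogicAugmentedNeuron`, the soft-truth input `z`, and the rule strength `rho` are assumptions.

```python
import torch
import torch.nn as nn

class LogicAugmentedNeuron(nn.Module):
    """Hypothetical sketch: compile the rule  Z -> Y  into the graph.

    Y is an ordinary neuron y = sigmoid(w.x + b). The rule adds a
    non-learnable term rho * z to y's pre-activation, where z in [0, 1]
    is the soft truth value of the premise Z (computed elsewhere in the
    network). Evidence for Z therefore pushes Y toward firing, acting
    as a differentiable surrogate for the logical implication.
    """

    def __init__(self, in_dim: int, rho: float = 1.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, 1)  # the named neuron Y
        self.rho = rho                      # rule strength (hyperparameter)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        pre_activation = self.linear(x).squeeze(-1)
        # Augment the pre-activation with the compiled rule term.
        return torch.sigmoid(pre_activation + self.rho * z)

# Usage: the rule biases Y upward whenever the premise is likely true.
layer = LogicAugmentedNeuron(in_dim=8, rho=2.0)
x = torch.randn(4, 8)   # a batch of inputs
z = torch.rand(4)       # soft truth of the premise Z, in [0, 1]
y = layer(x, z)         # conclusion neuron, shape (4,)
```

Because the rule enters only as a fixed term in the forward computation, the augmented network still trains end-to-end exactly as before, consistent with the abstract's claim of no extra learnable parameters or manual redesign.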
