EQUATE: A Benchmark Evaluation Framework For Quantitative Reasoning In Natural Language Inference

Abhilasha Ravichander, Aakanksha Naik, Carolyn Rose, Eduard Hovy. Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) 2019 – 76 citations


Quantitative reasoning is a higher-order reasoning skill that any intelligent natural language understanding system can reasonably be expected to handle. We present EQUATE (Evaluating Quantitative Understanding Aptitude in Textual Entailment), a new framework for quantitative reasoning in textual entailment. We benchmark the performance of 9 published NLI models on EQUATE, and find that on average, state-of-the-art methods do not achieve an absolute improvement over a majority-class baseline, suggesting that they do not implicitly learn to reason with quantities. We establish a new baseline Q-REAS that manipulates quantities symbolically. In comparison to the best performing NLI model, it achieves success on numerical reasoning tests (+24.2%), but has limited verbal reasoning capabilities (-8.1%). We hope our evaluation framework will support the development of models of quantitative reasoning in language understanding.
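To make the task concrete, the sketch below shows a toy version of symbolic quantity handling for NLI pairs: extract the numbers mentioned in a premise and hypothesis, then label the pair by comparing those quantities. This is a minimal, hypothetical illustration of the kind of reasoning EQUATE probes, not the paper's Q-REAS system; the function names (`extract_quantities`, `naive_quantity_label`) and the crude labeling heuristic are assumptions introduced for this example.

```python
import re

# Toy illustration of quantity-aware NLI labeling (hypothetical, not Q-REAS):
# pull numeric mentions out of each sentence, then compare them symbolically.

NUM_PATTERN = re.compile(r"\d+(?:\.\d+)?")


def extract_quantities(sentence: str) -> list[float]:
    """Return all numeric mentions in a sentence as floats."""
    return [float(m) for m in NUM_PATTERN.findall(sentence)]


def naive_quantity_label(premise: str, hypothesis: str) -> str:
    """Assign an NLI label by checking whether every quantity in the
    hypothesis also appears in the premise (a deliberately crude heuristic)."""
    premise_qs = extract_quantities(premise)
    hypothesis_qs = extract_quantities(hypothesis)
    if not hypothesis_qs:
        return "neutral"
    if all(q in premise_qs for q in hypothesis_qs):
        return "entailment"
    return "contradiction"


if __name__ == "__main__":
    print(naive_quantity_label(
        "The company shipped 1200 units in March.",
        "The company shipped 1200 units."))   # entailment
    print(naive_quantity_label(
        "The company shipped 1200 units in March.",
        "The company shipped 900 units."))    # contradiction
```

Even this heuristic highlights why quantitative NLI is hard for purely neural models: the correct label often hinges on exact numeric comparison, unit handling, and arithmetic composition rather than lexical overlap, which is the gap the paper's evaluation framework is designed to expose.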

Similar Work