
DocNLI: A Large-scale Dataset for Document-level Natural Language Inference

Wenpeng Yin, Dragomir Radev, Caiming Xiong. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 – 50 citations


Natural language inference (NLI) has been formulated as a unified framework for solving various NLP problems such as relation extraction, question answering, and summarization. It has been studied intensively in recent years thanks to the availability of large-scale labeled datasets. However, most existing studies focus on sentence-level inference only, which limits the scope of NLI's application to downstream NLP problems. This work presents DocNLI, a newly constructed large-scale dataset for document-level NLI. DocNLI is transformed from a broad range of NLP problems and covers multiple genres of text. The premises always remain at document granularity, whereas the hypotheses vary in length from single sentences to passages with hundreds of words. In addition, DocNLI contains far fewer of the annotation artifacts that are widespread in some popular sentence-level NLI datasets. Our experiments demonstrate that, even without fine-tuning, a model pretrained on DocNLI shows promising performance on popular sentence-level benchmarks and generalizes well to out-of-domain NLP tasks that rely on inference at document granularity. Task-specific fine-tuning brings further improvements. Data, code, and pretrained models can be found at https://github.com/salesforce/DocNLI.
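The abstract describes a simple data shape: a document-length premise paired with a hypothesis of varying length under an entailment label. A minimal sketch of that format is below; the field names and the binary `entailment` / `not_entailment` labels are assumptions for illustration — see the repository at https://github.com/salesforce/DocNLI for the actual schema.

```python
# Hypothetical sketch of a DocNLI-style example: a document-level premise
# paired with a hypothesis (sentence- or passage-length) and a binary label.
# Field names and label strings are assumptions, not the dataset's real schema.

def make_example(premise: str, hypothesis: str, entailed: bool) -> dict:
    """Build one document-level NLI example with a binary label."""
    return {
        "premise": premise,
        "hypothesis": hypothesis,
        "label": "entailment" if entailed else "not_entailment",
    }

# A multi-sentence document serves as the premise.
document = (
    "The city council approved the new transit budget on Monday. "
    "Construction of two additional light-rail lines will begin next spring, "
    "funded by a mix of federal grants and local bonds."
)

examples = [
    # Sentence-length hypothesis, supported by the document.
    make_example(document, "The transit budget was approved.", True),
    # Passage-length hypothesis, contradicted in part (wrong funding source).
    make_example(
        document,
        "Two light-rail lines will be built next spring. "
        "The project is funded entirely by private investors.",
        False,
    ),
]

for ex in examples:
    print(ex["label"], "-", ex["hypothesis"][:40])
```

The point of the sketch is only that the premise stays at document granularity while hypotheses range from a single sentence to a short passage, which is what distinguishes DocNLI from sentence-level NLI corpora.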
