
ANTIQUE: A Non-factoid Question Answering Benchmark

Helia Hashemi, Mohammad Aliannejadi, Hamed Zamani, W. Bruce Croft. Lecture Notes in Computer Science, 2020 – 67 citations

Tags: Datasets, Evaluation, Retrieval Systems

Given the widespread use of mobile and voice search, answer passage retrieval for non-factoid questions plays a critical role in modern information retrieval systems. Despite the importance of the task, the community still lacks large-scale non-factoid question answering collections with real questions and comprehensive relevance judgments. In this paper, we develop and release a collection of 2,626 open-domain non-factoid questions from a diverse set of categories. The dataset, called ANTIQUE, contains 34,011 manual relevance annotations. The questions were asked by real users on a community question answering service, namely Yahoo! Answers. Relevance judgments for all the answers to each question were collected through crowdsourcing. To facilitate further research, we also include a brief analysis of the data as well as baseline results for both classical and recently developed neural IR models.
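The collection pairs each question with graded relevance judgments over its candidate answers. A minimal sketch of working with such judgments, assuming the standard TREC qrels format (`query_id iteration doc_id grade`) that IR collections of this kind are commonly distributed in; the sample lines and identifiers below are illustrative, not taken from the real dataset:

```python
# Hypothetical sketch: parse TREC-style qrels lines into a nested mapping,
# query_id -> {doc_id: relevance_grade}. The sample data is made up for
# illustration and does not come from ANTIQUE itself.
from collections import defaultdict

SAMPLE_QRELS = """\
1001 Q0 1001_0 4
1001 Q0 1001_3 2
2002 Q0 2002_1 1
"""

def parse_qrels(text):
    """Parse whitespace-separated qrels lines into per-query grade dicts."""
    qrels = defaultdict(dict)
    for line in text.strip().splitlines():
        qid, _iteration, doc_id, grade = line.split()
        qrels[qid][doc_id] = int(grade)
    return dict(qrels)

qrels = parse_qrels(SAMPLE_QRELS)
print(len(qrels))              # number of judged questions in the sample
print(qrels["1001"]["1001_0"])  # relevance grade for one answer
```

Grouping judgments by question this way makes it straightforward to compute per-question statistics (e.g., how many answers were judged relevant) or to feed the grades into standard graded-relevance metrics such as nDCG.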

Similar Work