Training A Ranking Function For Open-domain Question Answering

Phu Mon Htut, Samuel R. Bowman, Kyunghyun Cho. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, 2018 – 41 citations

[Paper]
ACL Compositional Generalization Content Enrichment Interdisciplinary Approaches NAACL Question Answering RAG Training Techniques Variational Autoencoders Visual Contextualization Visual Question Answering

In recent years, deep learning methods for machine reading have advanced remarkably. In machine reading, the reader must extract the answer from a given ground-truth paragraph, and state-of-the-art models now reach human-level performance on SQuAD, a reading-comprehension-style question answering (QA) task. This success has inspired researchers to combine information retrieval with machine reading to tackle open-domain QA. However, such systems perform poorly compared to reading-comprehension-style QA because it is difficult to retrieve the passages that contain the answer to the question. In this study, we propose two neural network rankers that assign scores to passages based on their likelihood of containing the answer to a given question. Additionally, we analyze the relative importance of semantic similarity and word-level relevance matching in open-domain QA.
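To make the passage-ranking idea concrete, here is a minimal sketch of a neural ranker that scores question–passage pairs by the likelihood that the passage contains the answer. This is not the paper's exact architecture; the BiLSTM encoder, the [q; p; |q − p|; q ⊙ p] matching features, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of a neural passage ranker (illustrative, not the paper's model).
import torch
import torch.nn as nn


class PassageRanker(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Shared BiLSTM encoder for questions and passages.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Scoring head over [q; p; |q - p|; q * p], a common matching scheme.
        self.scorer = nn.Sequential(
            nn.Linear(8 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def encode(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, time, embed_dim)
        outputs, _ = self.encoder(embedded)        # (batch, time, 2 * hidden_dim)
        return outputs.max(dim=1).values           # max-pool over time

    def forward(self, question_ids, passage_ids):
        q = self.encode(question_ids)
        p = self.encode(passage_ids)
        features = torch.cat([q, p, torch.abs(q - p), q * p], dim=-1)
        return self.scorer(features).squeeze(-1)   # one relevance score per pair


if __name__ == "__main__":
    ranker = PassageRanker()
    question = torch.randint(1, 10000, (4, 12))    # the same question paired with...
    passages = torch.randint(1, 10000, (4, 60))    # ...4 candidate passages (toy token ids)
    scores = ranker(question, passages)
    print(scores)                                  # higher score = more likely to contain the answer
    print(scores.argsort(descending=True))         # induced passage ranking
```

In an open-domain QA pipeline, a ranker like this would score the passages returned by the retriever, and only the top-ranked passages would be passed to the machine reader.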

Similar Work