
SPARTA: Efficient Open-domain Question Answering Via Sparse Transformer Matching Retrieval

Tiancheng Zhao, Xiaopeng Lu, Kyusong Lee. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021 – 17 citations

[Paper]    
Model Architecture · Transformer · Interpretability and Explainability · Reinforcement Learning · Efficiency and Optimization

We introduce SPARTA, a novel neural retrieval method that shows great promise in performance, generalization, and interpretability for open-domain question answering. Unlike many neural ranking methods that use dense vector nearest-neighbor search, SPARTA learns a sparse representation that can be implemented efficiently as an inverted index. The resulting representation enables scalable neural retrieval that does not require expensive approximate vector search and outperforms its dense counterpart. We validated our approach on 4 open-domain question answering (OpenQA) tasks and 11 retrieval question answering (ReQA) tasks. SPARTA achieves new state-of-the-art results across a variety of open-domain question answering tasks on both English and Chinese datasets, including open SQuAD, Natural Questions, and CMRC. Analysis also confirms that the proposed method yields human-interpretable representations and allows flexible control over the trade-off between performance and efficiency.
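The retrieval mechanism the abstract describes, sparse per-document term weights served from an inverted index, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the toy term weights are hypothetical stand-ins for SPARTA's learned scores (in the paper, each weight is derived from a ReLU'd dot product between a non-contextual query-term embedding and the passage's contextual token encodings, with only the top-scoring terms kept per passage).

```python
from collections import defaultdict

def build_inverted_index(doc_term_weights):
    """Build term -> [(doc_id, weight)] postings from per-document
    sparse term weights. In SPARTA these weights are learned; here
    they are supplied as plain dicts for illustration."""
    index = defaultdict(list)
    for doc_id, weights in enumerate(doc_term_weights):
        for term, weight in weights.items():
            index[term].append((doc_id, weight))
    return index

def score_query(query_terms, index):
    """Rank documents by summing the indexed weight of each query term;
    terms absent from a document's sparse vector contribute nothing."""
    scores = defaultdict(float)
    for term in query_terms:
        for doc_id, weight in index.get(term, []):
            scores[doc_id] += weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical toy weights standing in for SPARTA's learned scores.
docs = [
    {"sparta": 2.1, "sparse": 1.4, "retrieval": 1.7, "index": 0.9},
    {"dense": 1.8, "vector": 1.5, "search": 1.2, "retrieval": 0.6},
]
index = build_inverted_index(docs)
print(score_query(["sparse", "retrieval"], index))  # doc 0 ranks first
```

Because ranking reduces to summing a handful of precomputed weights per query term, lookup costs match classical inverted-index search, which is why no approximate nearest-neighbor machinery is needed at query time.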

Similar Work