
ODSQA: Open-domain Spoken Question Answering Dataset

Chia-Hsuan Lee, Shang-Ming Wang, Huan-Cheng Chang, Hung-Yi Lee. IEEE Spoken Language Technology Workshop (SLT), 2018 – 45 citations

Datasets Image Text Integration Multimodal Semantic Representation Question Answering SLT Training Techniques Visual Contextualization

Reading comprehension by machine has been widely studied, but machine comprehension of spoken content remains a less investigated problem. In this paper, we release the Open-Domain Spoken Question Answering Dataset (ODSQA), containing more than three thousand questions. To the best of our knowledge, this is the largest real SQA dataset. On this dataset, we found that ASR errors have a catastrophic impact on SQA. To mitigate the effect of ASR errors, we incorporate subword units, which bring consistent improvements across all models. We further found that data augmentation of text-based QA training examples can improve SQA.
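As a rough intuition for why subword units can soften ASR errors (a hypothetical sketch for illustration only, not the paper's actual model): an ASR substitution typically corrupts only a few characters of a word, so most of its character n-grams survive, whereas a word-level lookup fails outright. The function names below are made up for this sketch.

```python
def char_ngrams(word, n=3):
    # Pad with boundary markers so short words still yield n-grams.
    padded = f"#{word}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def subword_overlap(a, b, n=3):
    # Jaccard similarity of the two words' character n-gram sets.
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

# A word-level exact match between the reference word and an
# ASR mis-recognition is simply False, but a large fraction of
# character trigrams is shared, so a subword representation
# still carries usable signal.
print("question" == "questian")                  # word-level match
print(subword_overlap("question", "questian"))   # subword similarity
```

This only illustrates the degradation behavior; the paper's subword-based models operate on learned representations, not raw n-gram overlap.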

Similar Work