
Relation-aware Video Reading Comprehension For Temporal Language Grounding

Jialin Gao, Xin Sun, Mengmeng Xu, Xi Zhou, Bernard Ghanem. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) – 45 citations

[Paper]
EMNLP, Interdisciplinary Approaches, Tools

Temporal language grounding in videos aims to localize the temporal span relevant to a given query sentence. Previous methods treat it either as a boundary regression task or as a span extraction task. This paper formulates temporal language grounding as video reading comprehension and proposes a Relation-aware Network (RaNet) to address it. The framework selects a video moment from a predefined answer set with the aid of coarse-and-fine choice-query interaction and choice-choice relation construction. A choice-query interactor is proposed to match visual and textual information simultaneously at the sentence-moment and token-moment levels, yielding coarse-and-fine cross-modal interaction. Moreover, a novel multi-choice relation constructor leverages graph convolution to capture the dependencies among video moment choices for selecting the best choice. Extensive experiments on ActivityNet-Captions, TACoS, and Charades-STA demonstrate the effectiveness of our solution. Code is available.
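
To make the multi-choice relation constructor concrete, below is a minimal PyTorch sketch of one graph-convolution layer over candidate moment "choices". All shapes, the residual update, and the overlap-style adjacency are illustrative assumptions, not the authors' implementation; the class name `ChoiceRelationGCN` is hypothetical.

```python
# Illustrative sketch of a graph-convolution "multi-choice relation constructor"
# in the spirit of RaNet. Layer sizes and the adjacency scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChoiceRelationGCN(nn.Module):
    """One graph-convolution layer over N video-moment choices.

    Each choice is a candidate temporal span; edges let choices exchange
    information so the network can reason about their dependencies before
    scoring the best answer.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, choices: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # choices: (B, N, D) fused choice-query features
        # adj:     (B, N, N) adjacency among moment choices (e.g. overlap-based)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # degree normalization
        msg = torch.bmm(adj, choices) / deg                 # aggregate neighbor features
        return F.relu(choices + self.proj(msg))             # residual update

# Toy usage: batch of 2 videos, 16 candidate moments, 256-d features.
feats = torch.randn(2, 16, 256)
adj = (torch.rand(2, 16, 16) > 0.5).float()
out = ChoiceRelationGCN(256)(feats, adj)
print(out.shape)  # torch.Size([2, 16, 256])
```

The residual connection here is a common design choice that lets each choice retain its own choice-query evidence while mixing in information from related moments; the paper's exact message-passing and adjacency definitions may differ.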

Similar Work