
Streamlined Dense Video Captioning

Jonghwan Mun, Linjie Yang, Zhou Ren, Ning Xu, Bohyung Han. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019 – 137 citations


Dense video captioning is an extremely challenging task, since an accurate and coherent description of events in a video requires holistic understanding of the video content as well as contextual reasoning about individual events. Most existing approaches handle this problem by first detecting event proposals from a video and then captioning a subset of the proposals. As a result, the generated sentences tend to be redundant or inconsistent, since they fail to consider temporal dependencies between events. To tackle this challenge, we propose a novel dense video captioning framework, which explicitly models temporal dependencies across events in a video and leverages visual and linguistic context from prior events for coherent storytelling. This objective is achieved by 1) integrating an event sequence generation network to adaptively select a sequence of event proposals, and 2) feeding the sequence of event proposals to our sequential video captioning network, which is trained by reinforcement learning with two-level rewards, at both the event and episode levels, for better context modeling. The proposed technique achieves outstanding performance on the ActivityNet Captions dataset in most metrics.
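
The sketch below illustrates the two-stage pipeline described in the abstract: an event sequence selection module followed by a sequential captioner that carries context across events, with a helper combining event- and episode-level rewards. All module names, dimensions, the greedy selection rule, and the reward weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline: event sequence selection,
# then sequential captioning with context carried across events.
import torch
import torch.nn as nn

class EventSequenceSelector(nn.Module):
    """Scores candidate event proposals and picks an ordered subset (assumed greedy top-k selection)."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, proposal_feats):            # (B, P, feat_dim)
        h, _ = self.rnn(proposal_feats)
        scores = self.score(h).squeeze(-1)        # (B, P)
        k = min(3, proposal_feats.size(1))
        # Keep the k highest-scoring proposals, reordered temporally.
        idx = scores.topk(k, dim=1).indices.sort(dim=1).values
        return idx                                 # (B, k)

class SequentialCaptioner(nn.Module):
    """Captions each selected event, passing a context state to the next event for coherence."""
    def __init__(self, feat_dim=512, hidden=256, vocab=1000, max_len=12):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.cell = nn.GRUCell(feat_dim + hidden, hidden)
        self.out = nn.Linear(hidden, vocab)
        self.max_len = max_len

    def forward(self, event_feats):               # (B, k, feat_dim)
        B, k, _ = event_feats.shape
        ctx = event_feats.new_zeros(B, self.cell.hidden_size)
        captions = []
        for e in range(k):                        # caption events in temporal order
            w = torch.zeros(B, dtype=torch.long)  # start token (index 0, assumed)
            h, words = ctx, []
            for _ in range(self.max_len):
                h = self.cell(torch.cat([event_feats[:, e], self.embed(w)], dim=1), h)
                w = self.out(h).argmax(dim=1)
                words.append(w)
            ctx = h                                # carry linguistic/visual context forward
            captions.append(torch.stack(words, dim=1))
        return torch.stack(captions, dim=1)        # (B, k, max_len)

def combined_reward(event_rewards, episode_score, alpha=0.5):
    """Mix per-event rewards with an episode-level score; the weighting is an assumption."""
    return alpha * event_rewards.mean(dim=1) + (1 - alpha) * episode_score

# Toy usage with random tensors standing in for video proposal features.
B, P, D = 2, 10, 512
proposals = torch.randn(B, P, D)
selector, captioner = EventSequenceSelector(D), SequentialCaptioner(D)
idx = selector(proposals)
selected = torch.gather(proposals, 1, idx.unsqueeze(-1).expand(-1, -1, D))
caps = captioner(selected)
print(caps.shape)  # torch.Size([2, 3, 12])
```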

Similar Work