How2: A Large-scale Dataset For Multimodal Language Understanding

Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loïc Barrault, Lucia Specia, Florian Metze. arXiv 2018 – 168 citations

In this paper, we introduce How2, a multimodal collection of instructional videos with English subtitles and crowdsourced Portuguese translations. We also present integrated sequence-to-sequence baselines for machine translation, automatic speech recognition, spoken language translation, and multimodal summarization. By making data and code available for several multimodal natural language tasks, we hope to stimulate more research on these and similar challenges and to obtain a deeper understanding of multimodality in language processing.
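The baselines described are attentional sequence-to-sequence models conditioned on pre-extracted audio or video features. The sketch below illustrates the general shape of such a model in PyTorch; the class name `Seq2SeqBaseline`, the feature dimensionality, and the dot-product attention scheme are assumptions for illustration, not the paper's actual architecture or released code.

```python
# A minimal sketch of a multimodal sequence-to-sequence baseline in the
# spirit of the paper. All dimensions, names, and the attention/fusion
# choices are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class Seq2SeqBaseline(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, hidden=256):
        super().__init__()
        # Encoder over pre-extracted per-segment visual/acoustic features
        # (How2 distributes such features; feat_dim=2048 is an assumption).
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        # Decoder input: target-token embedding concatenated with context.
        self.decoder = nn.LSTMCell(hidden + 2 * hidden, hidden)
        self.attn = nn.Linear(hidden, 2 * hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, tgt_tokens):
        # feats: (B, T_enc, feat_dim); tgt_tokens: (B, T_dec) teacher forcing.
        enc_out, _ = self.encoder(feats)                       # (B, T, 2H)
        B = feats.size(0)
        h = feats.new_zeros(B, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(tgt_tokens.size(1)):
            # Dot-product attention over encoder states.
            query = self.attn(h).unsqueeze(1)                  # (B, 1, 2H)
            scores = (query * enc_out).sum(-1)                 # (B, T)
            ctx = (scores.softmax(-1).unsqueeze(-1) * enc_out).sum(1)
            inp = torch.cat([self.embed(tgt_tokens[:, t]), ctx], dim=-1)
            h, c = self.decoder(inp, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                      # (B, T_dec, V)
```

Trained with cross-entropy against the target transcript, translation, or summary, the same skeleton covers each of the listed tasks by swapping the input features (speech, video, or text embeddings) and the target sequence.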

Similar Work