
Bridging Text And Video: A Universal Multimodal Transformer For Video-audio Scene-aware Dialog

Zekang Li, Zongjia Li, Jinchao Zhang, Yang Feng, Cheng Niu, Jie Zhou. IEEE/ACM Transactions on Audio, Speech, and Language Processing 2021 – 61 citations

Model Architecture · Multimodal Models · Transformer

Audio-Visual Scene-Aware Dialog (AVSD) is the task of generating responses in a conversation about a given video; it was organized as a track of the 8th Dialog System Technology Challenge (DSTC8). To solve the task, we propose a universal multimodal transformer and introduce a multi-task learning method to learn joint representations across modalities and to generate informative, fluent responses. Our method extends a pre-trained natural language generation model to the multimodal dialogue generation task. Our system achieves the best performance in both the objective and subjective evaluations of the challenge.
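To make the core idea concrete, here is a minimal PyTorch sketch, not the authors' released code, of extending a pre-trained generator such as GPT-2 to multimodal input: video and audio features are linearly projected into the token-embedding space and prepended to the dialogue tokens, so the standard language-modeling loss trains a joint representation. The class name `MultimodalGPT2` and the feature dimensions (`video_dim`, `audio_dim`) are illustrative assumptions, and the full multi-task setup from the paper would add auxiliary objectives alongside the generation loss shown here.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer


class MultimodalGPT2(nn.Module):
    """Sketch: feed projected video/audio features and text tokens to one transformer."""

    def __init__(self, video_dim=2048, audio_dim=128, model_name="gpt2"):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained(model_name)
        hidden = self.gpt2.config.n_embd
        # Linear layers map each modality into the token-embedding space.
        self.video_proj = nn.Linear(video_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)

    def forward(self, video_feats, audio_feats, input_ids, labels=None):
        # video_feats: (B, Tv, video_dim); audio_feats: (B, Ta, audio_dim)
        text_emb = self.gpt2.transformer.wte(input_ids)   # (B, Tt, hidden)
        video_emb = self.video_proj(video_feats)          # (B, Tv, hidden)
        audio_emb = self.audio_proj(audio_feats)          # (B, Ta, hidden)
        inputs_embeds = torch.cat([video_emb, audio_emb, text_emb], dim=1)
        if labels is not None:
            # Mask the non-text prefix positions so they contribute no LM loss.
            prefix = torch.full(
                (labels.size(0), video_emb.size(1) + audio_emb.size(1)),
                -100, dtype=labels.dtype, device=labels.device)
            labels = torch.cat([prefix, labels], dim=1)
        return self.gpt2(inputs_embeds=inputs_embeds, labels=labels)


if __name__ == "__main__":
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = MultimodalGPT2()
    ids = tokenizer("Q: what is happening? A: a man is cooking.",
                    return_tensors="pt").input_ids
    video = torch.randn(1, 8, 2048)  # 8 pooled video frames (assumed I3D-like features)
    audio = torch.randn(1, 8, 128)   # 8 audio segments (assumed VGGish-like features)
    out = model(video, audio, ids, labels=ids)
    print(out.loss)
```

Because all modalities share one input sequence, the pre-trained transformer's self-attention attends jointly over video, audio, and dialogue history, which is what lets the language-generation pre-training transfer to the multimodal setting.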

Similar Work