
CUNI System For The WMT18 Multimodal Translation Task

Jindřich Helcl, Jindřich Libovický, Dušan Variš. Proceedings of the Third Conference on Machine Translation: Shared Task Papers, 2018. 60 citations.

Tags: WMT

We present our submission to the WMT18 Multimodal Translation Task. The main feature of our submission is the use of a self-attentive network instead of a recurrent neural network. We evaluate two methods of incorporating the visual features into the model: first, we include the image representation as another input to the network; second, we train the model to predict the visual features and use this prediction as an auxiliary objective. For our submission, we acquired additional textual and multimodal data. Both proposed methods yield significant improvements over recurrent and self-attentive textual baselines.
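
The two strategies can be pictured concretely. Below is a minimal sketch in PyTorch, assuming a standard `nn.Transformer` encoder-decoder and a single pre-extracted global image feature vector (e.g. 2048 dimensions): the first method appends a projected image vector to the encoder states so the decoder can attend to it, and the second regresses the image features from the encoder output as an auxiliary loss. All class names, dimensions, and the auxiliary loss weight are illustrative assumptions, not the authors' actual implementation, and the two methods are shown in one model only for brevity (the paper evaluates them separately).

```python
# Sketch of the two multimodal strategies described in the abstract.
# Everything here (names, sizes, loss weight) is an assumption for illustration.
import torch
import torch.nn as nn


class MultimodalTransformer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, img_dim=2048,
                 nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)
        # Method 1: project the image feature so the decoder can attend to it
        # as one extra "token" appended to the encoder input.
        self.img_proj = nn.Linear(img_dim, d_model)
        # Method 2: predict the image feature vector from the encoder output
        # and use the regression error as an auxiliary training objective.
        self.img_predict = nn.Linear(d_model, img_dim)

    def forward(self, src_ids, tgt_ids, img_feats):
        # Positional encodings are omitted here to keep the sketch short.
        src = self.embed(src_ids)                             # (B, S, d_model)
        tgt = self.embed(tgt_ids)                             # (B, T, d_model)

        # Method 1: append the projected image vector to the source sequence.
        img_token = self.img_proj(img_feats).unsqueeze(1)     # (B, 1, d_model)
        src_with_img = torch.cat([src, img_token], dim=1)

        memory = self.transformer.encoder(src_with_img)       # (B, S+1, d_model)
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            tgt_ids.size(1))
        dec_out = self.transformer.decoder(tgt, memory, tgt_mask=tgt_mask)
        logits = self.out(dec_out)                            # (B, T, vocab)

        # Method 2: auxiliary prediction of the visual features.
        img_pred = self.img_predict(memory.mean(dim=1))       # (B, img_dim)
        aux_loss = nn.functional.mse_loss(img_pred, img_feats)
        return logits, aux_loss


# Toy usage: combine translation cross-entropy with the auxiliary loss.
model = MultimodalTransformer()
src = torch.randint(0, 10000, (2, 7))
tgt = torch.randint(0, 10000, (2, 5))
img = torch.randn(2, 2048)
logits, aux_loss = model(src, tgt, img)
ce = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                 tgt.reshape(-1))
loss = ce + 0.1 * aux_loss   # the weight of the auxiliary term is an assumption
loss.backward()
```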
