Scene Graph As Pivoting: Inference-time Image-free Unsupervised Multimodal Machine Translation With Visual Scene Hallucination

Hao Fei, Qian Liu, Meishan Zhang, Min Zhang, Tat-Seng Chua. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023 – 43 citations

Tags: ACL, Compositional Generalization, Evaluation, Image Text Integration, Interdisciplinary Approaches, Neural Machine Translation, Training Techniques, Visual Contextualization

In this work, we investigate a more realistic unsupervised multimodal machine translation (UMMT) setup, inference-time image-free UMMT, where the model is trained with source-text image pairs but tested with only source-text inputs. First, we represent the input images and texts with visual and language scene graphs (SG), whose fine-grained vision-language features ensure a holistic understanding of the semantics. To enable pure-text input during inference, we devise a visual scene hallucination mechanism that dynamically generates a pseudo visual SG from the given textual SG. Several SG-pivoting-based learning objectives are introduced for unsupervised translation training. On the benchmark Multi30K data, our SG-based method outperforms the best-performing baseline by significant BLEU margins under this task setup, yielding translations with better completeness, relevance, and fluency without relying on paired images. Further in-depth analyses reveal how our model advances in this task setting.
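The visual scene hallucination mechanism can be illustrated with a minimal sketch: given only a textual SG at inference time, a pseudo visual SG is derived by copying the textual skeleton and then completing it with vision-typical content that text tends to omit. All names below (`SceneGraph`, `hallucinate_vsg`, the `<scene-context>` placeholder) are hypothetical, and the rule-based completion merely stands in for what the paper's trained model would predict.

```python
# Hypothetical sketch of inference-time visual scene hallucination:
# a textual scene graph (TSG) is mapped to a pseudo visual scene graph
# (VSG) without any paired image. The two-step decomposition (copy the
# skeleton, then complete the graph) is an illustrative assumption,
# not the authors' released implementation.
from dataclasses import dataclass, field


@dataclass
class SceneGraph:
    nodes: list[str]  # object / attribute / relation labels
    edges: list[tuple[int, int]] = field(default_factory=list)  # (head, tail)


def hallucinate_vsg(tsg: SceneGraph) -> SceneGraph:
    """Derive a pseudo visual SG from a textual SG (illustrative only)."""
    # Step 1: copy the textual skeleton. Objects and their relations
    # transfer directly, since both modalities share SG structure.
    vsg = SceneGraph(nodes=list(tsg.nodes), edges=list(tsg.edges))

    # Step 2: "complete" the graph with vision-typical nodes that the
    # text leaves implicit (e.g., scene or background objects). A trained
    # model would predict these; here a placeholder node stands in.
    vsg.nodes.append("<scene-context>")
    vsg.edges.append((len(vsg.nodes) - 1, 0))  # attach context to first object
    return vsg


# Usage: hallucinate a pseudo visual SG for "a dog chases a ball",
# with no image available at inference time.
tsg = SceneGraph(nodes=["dog", "chases", "ball"], edges=[(0, 1), (1, 2)])
pseudo_vsg = hallucinate_vsg(tsg)
print(pseudo_vsg.nodes)  # ['dog', 'chases', 'ball', '<scene-context>']
```

The pseudo visual SG then serves as the vision-side pivot for the SG-pivoting translation objectives, in place of an SG parsed from a real paired image.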
