When And Why Is Unsupervised Neural Machine Translation Useless? | Awesome LLM Papers

When And Why Is Unsupervised Neural Machine Translation Useless?

Yunsu Kim, Miguel Graça, Hermann Ney. arXiv 2020 – 42 citations

Compositional Generalization Interdisciplinary Approaches Neural Machine Translation Training Techniques Variational Autoencoders

This paper studies the practicality of current state-of-the-art unsupervised methods in neural machine translation (NMT). Across ten translation tasks with various data settings, we analyze the conditions under which the unsupervised methods fail to produce reasonable translations. We show that their performance is severely affected by linguistic dissimilarity and by domain mismatch between source and target monolingual data. Such conditions are common for low-resource language pairs, where unsupervised learning works poorly. In all of our experiments, supervised and semi-supervised baselines trained on 50k bilingual sentence pairs outperform the best unsupervised results. Our analyses pinpoint the limits of current unsupervised NMT and also suggest immediate research directions.

Similar Work