Neural Machine Translation For Low-resource Languages

Robert Östling, Jörg Tiedemann. ACM Computing Surveys 2022 – 175 citations

Tags: Evaluation, Survey Paper, Training Techniques

Neural machine translation (NMT) approaches have improved the state of the art in many machine translation settings over the last couple of years, but they require large amounts of training data to produce sensible output. We demonstrate that NMT can be used for low-resource languages as well, by introducing more local dependencies and using word alignments to learn sentence reordering during translation. In addition to our novel model, we also present an empirical evaluation of low-resource phrase-based statistical machine translation (SMT) and NMT to investigate the lower limits of the respective technologies. We find that while SMT remains the best option for low-resource settings, our method can produce acceptable translations with only 70,000 tokens of training data, a level where the baseline NMT system fails completely.
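The core idea referenced in the abstract, deriving reordering information from word alignments, can be illustrated with a small sketch. The code below is not the authors' model; it is a minimal toy example that trains an IBM Model 1-style aligner with EM on a hypothetical parallel corpus and then permutes a source sentence into target-language order using the learned alignments. All function names, sentence pairs, and parameters are assumptions for illustration only.

```python
# Minimal sketch only, NOT the model from the paper. It illustrates the
# general idea of using word alignments to derive source-side reordering,
# via a toy IBM Model 1 aligner trained with EM. Corpus and names are
# hypothetical.
from collections import defaultdict

def train_ibm1(corpus, iterations=15):
    """Estimate IBM Model 1 lexical probabilities t(f|e) with EM."""
    t = defaultdict(lambda: 1.0)  # uniform initialisation
    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # normaliser per target word e
        for src, tgt in corpus:
            for f in src:
                z = sum(t[(f, e)] for e in tgt)
                for e in tgt:
                    c = t[(f, e)] / z  # posterior that f aligns to e
                    count[(f, e)] += c
                    total[e] += c
        t = defaultdict(float, {fe: count[fe] / total[fe[1]] for fe in count})
    return t

def align(src, tgt, t):
    """For each source word, pick the most probable target position."""
    return [max(range(len(tgt)), key=lambda j: t[(f, tgt[j])]) for f in src]

def reorder(src, alignment):
    """Permute the source words into target-language order."""
    return [f for _, f in sorted(zip(alignment, src))]

# Toy German-English corpus (hypothetical data).
corpus = [
    ("das haus ist klein".split(), "the house is small".split()),
    ("ein auto ist schnell".split(), "a car is fast".split()),
    ("das auto".split(), "the car".split()),
    ("weil das haus klein ist".split(), "because the house is small".split()),
    ("das haus".split(), "the house".split()),
]
t = train_ibm1(corpus)

# German subordinate clauses are verb-final; the alignment recovers the swap.
src = "weil das haus klein ist".split()
tgt = "because the house is small".split()
a = align(src, tgt, t)
print(a)               # e.g. [0, 1, 2, 4, 3]
print(reorder(src, a))  # ['weil', 'das', 'haus', 'ist', 'klein']
```

Even this crude alignment signal makes the source sentence monotone with the target, which is roughly the kind of extra structure the abstract suggests helps an NMT model cope with very small training sets.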
