
End-to-end Non-autoregressive Neural Machine Translation With Connectionist Temporal Classification

Jindřich Libovický, Jindřich Helcl. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2018 – 130 citations

[Paper]
EMNLP

Autoregressive decoding is the only part of sequence-to-sequence models that prevents them from massive parallelization at inference time. Non-autoregressive models enable the decoder to generate all output symbols independently in parallel. We present a novel non-autoregressive architecture based on connectionist temporal classification (CTC) and evaluate it on the task of neural machine translation. Unlike other non-autoregressive methods, which operate in several steps, our model can be trained end-to-end. We conduct experiments on the WMT English-Romanian and English-German datasets. Our models achieve a significant speedup over autoregressive models while keeping translation quality comparable to that of other non-autoregressive models.
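
To make the general idea concrete, here is a minimal PyTorch sketch of CTC-based non-autoregressive translation: a Transformer encoder processes the source in parallel, each encoder state is projected to several output positions so the output can be longer than the source, and the resulting logit sequence is trained with a CTC loss that lets the model emit blanks where needed. All names, dimensions, the upsampling factor, and the vocabulary size below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CTCNonAutoregressiveNMT(nn.Module):
    """Sketch: encode the source in parallel, emit an upsampled logit
    sequence, and train with CTC (hyperparameters are illustrative)."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6, upsample=3):
        super().__init__()
        self.upsample = upsample          # output positions emitted per source token
        self.vocab_size = vocab_size
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # project each encoder state to `upsample` output positions; +1 class for the CTC blank
        self.proj = nn.Linear(d_model, upsample * (vocab_size + 1))

    def forward(self, src_tokens):
        # src_tokens: (batch, src_len)
        h = self.encoder(self.embed(src_tokens))          # (batch, src_len, d_model)
        logits = self.proj(h)                             # (batch, src_len, upsample * (V + 1))
        b, s, _ = logits.shape
        return logits.view(b, s * self.upsample, self.vocab_size + 1)

# Toy training step with CTC loss (blank index = vocab_size).
vocab_size = 32000
model = CTCNonAutoregressiveNMT(vocab_size)
ctc = nn.CTCLoss(blank=vocab_size, zero_infinity=True)

src = torch.randint(0, vocab_size, (2, 10))              # toy source batch
tgt = torch.randint(0, vocab_size, (2, 12))              # toy target batch
logits = model(src)                                       # (batch, 30, V + 1)
log_probs = logits.log_softmax(-1).transpose(0, 1)        # (T, batch, V + 1), as nn.CTCLoss expects
input_lengths = torch.full((2,), logits.size(1), dtype=torch.long)
target_lengths = torch.full((2,), tgt.size(1), dtype=torch.long)
loss = ctc(log_probs, tgt, input_lengths, target_lengths)
loss.backward()
```

At inference, one can take the argmax at every output position in a single parallel pass and then collapse repeated tokens and remove blanks (greedy CTC decoding), which is what allows the decoder to avoid the step-by-step generation of autoregressive models.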

Similar Work