
Learning To Remember Translation History With A Continuous Cache

Zhaopeng Tu, Yang Liu, Shuming Shi, Tong Zhang. Transactions of the Association for Computational Linguistics, 2018 – 187 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
ACL, Compositional Generalization, Interdisciplinary Approaches, Memory & Context, Neural Machine Translation, TACL

Existing neural machine translation (NMT) models generally translate sentences in isolation, missing the opportunity to take advantage of document-level information. In this work, we propose to augment NMT models with a lightweight, cache-like memory network that stores recent hidden representations as translation history. The probability distribution over generated words is updated online depending on the translation history retrieved from the memory, endowing NMT models with the capability to dynamically adapt over time. Experiments on multiple domains with different topics and styles show the effectiveness of the proposed approach with negligible impact on computational cost.
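Concretely, the cache is a key-value memory: at each decoding step the attention context vector c_t acts as the key and the decoder hidden state s_t as the value, and at the next steps the current context queries the cache so that the retrieved history can be gated into the prediction. Below is a minimal sketch in PyTorch, assuming a FIFO cache with dot-product matching and a simple sigmoid gate; the class and parameter names (ContinuousCache, cache_size, fuse) are illustrative, not taken from the authors' code.

```python
# Minimal sketch of a continuous cache for NMT decoding (not the
# authors' implementation). Keys are attention context vectors c_t,
# values are decoder hidden states s_t; retrieval is dot-product
# attention over the cached keys, and the retrieved state is mixed
# with the current state through a learned gate.
from collections import deque
from typing import Optional

import torch
import torch.nn.functional as F


class ContinuousCache(torch.nn.Module):
    def __init__(self, dim: int, cache_size: int = 25):
        super().__init__()
        self.keys = deque(maxlen=cache_size)    # context vectors c_t
        self.values = deque(maxlen=cache_size)  # hidden states s_t
        # Gate deciding how much retrieved history to mix in
        # (assumed form; the paper conditions the gate on the
        # current state and the retrieved memory).
        self.gate = torch.nn.Linear(2 * dim, dim)

    def write(self, context: torch.Tensor, state: torch.Tensor) -> None:
        # Append the (key, value) pair for the word just generated;
        # the deque drops the oldest entry once the cache is full.
        self.keys.append(context.detach())
        self.values.append(state.detach())

    def read(self, context: torch.Tensor) -> Optional[torch.Tensor]:
        # Match the current context against cached keys and return
        # the attention-weighted average of the cached states.
        if not self.keys:
            return None
        K = torch.stack(list(self.keys))     # (n, dim)
        V = torch.stack(list(self.values))   # (n, dim)
        probs = F.softmax(K @ context, dim=0)
        return probs @ V                     # (dim,)

    def fuse(self, state: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Interpolate the current state with the retrieved history;
        # the fused state then feeds the output softmax as usual.
        memory = self.read(context)
        if memory is None:
            return state
        z = torch.sigmoid(self.gate(torch.cat([state, context])))
        return z * state + (1.0 - z) * memory


# Usage inside a decoding loop (schematic):
#   cache = ContinuousCache(dim=512)
#   s_fused = cache.fuse(s_t, c_t)   # use s_fused to predict word t
#   cache.write(c_t, s_t)            # then record the step
```

Because the cache only stores vectors the decoder already computes and retrieval is a single attention pass over a small buffer, the added cost per step is small, which matches the paper's claim of negligible computational overhead.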

Similar Work