
Dynamic Evaluation Of Neural Sequence Models

Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals. arXiv 2017 – 69 citations

Datasets Evaluation Neural Machine Translation Training Techniques

We present a methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient-descent-based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
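The core idea — evaluate each piece of the test sequence, then take a gradient step on it so the model adapts to recently seen patterns — can be illustrated with a minimal sketch. This toy uses a bigram logit table with a hand-derived cross-entropy gradient rather than the paper's RNN language models; the function name, learning rate, and model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_eval(seq, vocab_size, lr=0.5):
    """Toy dynamic evaluation: score each transition under the current
    parameters, then gradient-step on it so re-occurring patterns get
    higher probability. (Illustrative sketch, not the paper's setup.)"""
    W = np.zeros((vocab_size, vocab_size))  # logits; row = previous token
    total_nll = 0.0
    for prev, nxt in zip(seq[:-1], seq[1:]):
        p = softmax(W[prev])
        total_nll += -np.log(p[nxt])         # evaluate BEFORE adapting
        grad = p.copy()
        grad[nxt] -= 1.0                     # d(nll)/d(logits) for softmax-CE
        W[prev] -= lr * grad                 # adapt to recent history
    return total_nll / (len(seq) - 1)        # average NLL in nats

# A repeating pattern: with adaptation (lr > 0) the average NLL drops
# below the static model's uniform log(3) per step.
seq = [0, 1, 2] * 4
print(dynamic_eval(seq, vocab_size=3, lr=0.5))
print(dynamic_eval(seq, vocab_size=3, lr=0.0))  # static baseline
```

Note that each transition is scored before the update, so the adaptation never peeks at the token being predicted — the same ordering the online-adaptation setting requires.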
