Are All Languages Equally Hard To Language-model?

Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, Brian Roark. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 2018 – 94 citations

[Paper]
NAACL

For general modeling methods applied to diverse languages, a natural question is: how well should we expect our models to work on languages with differing typological profiles? In this work, we develop an evaluation framework for fair cross-linguistic comparison of language models, using translated text so that all models are asked to predict approximately the same information. We then conduct a study on 21 languages, demonstrating that in some languages, the textual expression of the information is harder to predict with both n-gram and LSTM language models. We show complex inflectional morphology to be a cause of performance differences among languages.
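The key idea behind the framework is that on a multi-parallel corpus every language's side expresses (approximately) the same information, so the total number of bits a model needs to encode each side is directly comparable across languages, unlike per-token perplexity. Below is a minimal sketch of that comparison, not the authors' implementation: the paper fits n-gram and LSTM language models on Europarl translations, whereas this toy uses an add-one-smoothed character bigram model and a hypothetical two-sentence stand-in corpus.

```python
# Sketch of the paper's evaluation idea: compare languages by the TOTAL
# bits needed to encode aligned translations of the same content.
# (Toy character bigram model; the paper uses n-gram and LSTM LMs.)

import math
from collections import Counter

def char_bigram_bits(train_lines, test_lines):
    """Total bits to encode test_lines under an add-one-smoothed
    character bigram model estimated from train_lines."""
    bigrams, contexts = Counter(), Counter()
    vocab = set("<")  # "<" marks the start of a line
    for line in train_lines:
        chars = "<" + line
        vocab.update(chars)
        for a, b in zip(chars, chars[1:]):
            bigrams[a, b] += 1
            contexts[a] += 1
    V = len(vocab)
    total = 0.0
    for line in test_lines:
        chars = "<" + line
        for a, b in zip(chars, chars[1:]):
            # add-one smoothing: p(b | a) = (c(a,b) + 1) / (c(a) + V)
            p = (bigrams[a, b] + 1) / (contexts[a] + V)
            total += -math.log2(p)
    return total

# Hypothetical multi-parallel data: the same two sentences in two
# languages, standing in for the Europarl translations of the paper.
parallel = {
    "en": ["the dog sleeps", "the dogs sleep"],
    "de": ["der hund schläft", "die hunde schlafen"],
}

# Since both sides encode the same information, total bits give a
# (rough) per-language difficulty estimate for the model family.
for lang, lines in parallel.items():
    bits = char_bigram_bits(lines, lines)  # train == test in this toy demo
    print(f"{lang}: {bits:.1f} bits for the aligned test set")
```

In practice one would train on a held-out portion of each language's side and score the aligned test translations; the point of the design is that summing bits over aligned content removes the confound of languages segmenting the same information into different numbers of tokens.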
