Probabilistic Predictions Of People Perusing: Evaluating Metrics Of Language Model Performance For Psycholinguistic Modeling


Yiding Hao, Simon Mendelsohn, Rachel Sterneck, Randi Martinez, Robert Frank. Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics 2020 – 45 citations


By positing a relationship between naturalistic reading times and information-theoretic surprisal, surprisal theory (Hale, 2001; Levy, 2008) provides a natural interface between language models and psycholinguistic models. This paper re-evaluates a claim due to Goodkind and Bicknell (2018) that a language model’s ability to model reading times is a linear function of its perplexity. By extending Goodkind and Bicknell’s analysis to modern neural architectures, we show that the proposed relation does not always hold for Long Short-Term Memory networks, Transformers, and pre-trained models. We introduce an alternate measure of language modeling performance called predictability norm correlation based on Cloze probabilities measured from human subjects. Our new metric yields a more robust relationship between language model quality and psycholinguistic modeling performance that allows for comparison between models with different training configurations.
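The two metrics contrasted in the abstract can be sketched in a few lines: perplexity is derived from per-word surprisal, while predictability norm correlation compares a model's word probabilities against human cloze probabilities. This is an illustrative sketch only — the paper's exact formulation (e.g., whether probabilities or log-probabilities are correlated, and how cloze responses are aggregated across subjects) may differ, and the data values below are invented.

```python
import math

def surprisal(p):
    # Surprisal in bits: -log2 of the model's probability for the observed word.
    return -math.log2(p)

def perplexity(probs):
    # Perplexity = 2 ** (mean per-word surprisal).
    return 2 ** (sum(surprisal(p) for p in probs) / len(probs))

def pearson_r(xs, ys):
    # Plain Pearson correlation, avoiding external dependencies.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-token probabilities from a language model, and hypothetical
# cloze probabilities (fraction of human subjects producing each word) for the
# same tokens. Values are made up for illustration.
model_probs = [0.40, 0.05, 0.60, 0.10, 0.25]
cloze_probs = [0.55, 0.10, 0.70, 0.05, 0.30]

print(f"perplexity: {perplexity(model_probs):.2f}")
print(f"predictability norm correlation: {pearson_r(model_probs, cloze_probs):.3f}")
```

One attraction of the correlation-based metric noted in the abstract is that, unlike perplexity, it remains comparable across models with different vocabularies or training configurations, since both quantities are evaluated against the same human norms.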
