Human Sentence Processing: Recurrence Or Attention?

Danny Merkx, Stefan L. Frank. Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics 2021 – 78 citations

[Paper]
Model Architecture

Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The more recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models' ability to account for measures of human reading effort. Our analysis shows that Transformers outperform RNNs in explaining self-paced reading times and neural activity during the reading of English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing, and providing evidence for cue-based retrieval.
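The comparison described above hinges on the standard surprisal paradigm: each language model assigns a probability to every upcoming word, the negative log of that probability (the word's surprisal) is computed, and those surprisals are tested as predictors of reading times and neural signals. Below is a minimal sketch of that pipeline, not the authors' code; it assumes the HuggingFace `transformers` library and the off-the-shelf `gpt2` checkpoint as a stand-in for the paper's own trained Transformer and RNN models.

```python
# Minimal sketch of surprisal-based evaluation of a language model.
# Assumptions: HuggingFace `transformers` and the pretrained `gpt2`
# checkpoint; the paper itself trained its own models on matched data.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Surprisal (in bits) of each token given its left context."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab_size)
    # Log-probability of each token under the model, given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
    return [(tok, s.item() / math.log(2)) for tok, s in zip(tokens, nll)]

# A classic garden-path sentence, where surprisal should spike at "fell".
for tok, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{tok:>10s}  {s:6.2f} bits")

# In the paper's setup, word-level surprisals (summed over subword tokens)
# then enter a regression against per-word self-paced reading times or
# neural activity; the better a model's surprisals fit the human data,
# the better it accounts for human sentence processing.
```

The same procedure applies to an RNN language model; only the surprisal-producing network changes, which is what lets the paper attribute differences in fit to the architecture itself.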

Similar Work