Generalization Without Systematicity: On The Compositional Skills Of Sequence-to-sequence Recurrent Networks

Brenden M. Lake and Marco Baroni (2018). Generalization Without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. International Conference on Machine Learning (ICML) 2018 – 357 citations

[Paper]
ICML Training Techniques

Humans can understand and produce new utterances effortlessly, thanks to their compositional skills. Once a person learns the meaning of a new verb “dax,” he or she can immediately understand the meaning of “dax twice” or “sing and dax.” In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can make successful zero-shot generalizations when the differences between training and test commands are small, so that they can apply “mix-and-match” strategies to solve the task. However, when generalization requires systematic compositional skills (as in the “dax” example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, suggesting that lack of systematicity might be partially responsible for neural networks’ notorious training data thirst.
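To make the SCAN setup concrete, here is a minimal sketch of how SCAN-style commands decompose into action sequences. This is a hypothetical simplification for illustration only, not the official dataset generator from the paper; it covers just primitive verbs, "twice"/"thrice" repetition, and "and" conjunction, omitting SCAN's direction and "around"/"opposite" modifiers. In the paper's split, a primitive such as "jump" plays the role of "dax": it appears in training only in isolation, and models must generalize to composed commands like "jump twice".

```python
# Minimal sketch of SCAN-like command semantics (illustrative simplification,
# not the official SCAN grammar from Lake & Baroni, 2018).

PRIMITIVES = {
    "walk": ["I_WALK"],
    "run": ["I_RUN"],
    "jump": ["I_JUMP"],
    "look": ["I_LOOK"],
}

def interpret(command: str) -> list[str]:
    """Map a simplified SCAN-style command to its action sequence."""
    tokens = command.split()
    # Conjunction: "X and Y" -> actions(X) followed by actions(Y).
    if "and" in tokens:
        i = tokens.index("and")
        return interpret(" ".join(tokens[:i])) + interpret(" ".join(tokens[i + 1:]))
    # Repetition: "X twice" / "X thrice" -> repeat actions(X).
    if tokens[-1] in ("twice", "thrice"):
        n = 2 if tokens[-1] == "twice" else 3
        return interpret(" ".join(tokens[:-1])) * n
    # Base case: a single primitive verb.
    return PRIMITIVES[tokens[0]]

if __name__ == "__main__":
    print(interpret("jump twice"))           # ['I_JUMP', 'I_JUMP']
    print(interpret("walk and jump"))        # ['I_WALK', 'I_JUMP']
    print(interpret("run thrice and look"))  # ['I_RUN', 'I_RUN', 'I_RUN', 'I_LOOK']
```

Because the mapping is generated by such a compositional rule, a learner with systematic compositional skills could in principle interpret any held-out combination; the paper's finding is that seq2seq RNNs succeed only when test commands stay close to the training distribution.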

Similar Work