
Comparing Attention-based Convolutional And Recurrent Neural Networks: Success And Limitations In Machine Reading Comprehension

Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, Ngoc Thang Vu. Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL), 2018 – 46 citations

Tags: Compositional Generalization · Datasets · Interdisciplinary Approaches · Neural Machine Translation · Question Answering · Security · Tools · Variational Autoencoders

We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset. To investigate the limitations of our model as well as the behavioral difference between convolutional and recurrent neural networks, we generate adversarial examples to confuse the model and compare its performance to that of humans. Furthermore, we assess the generalizability of our model by analyzing its differences from human inference.
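To make the compare-aggregate idea concrete, below is a minimal PyTorch sketch of one such reader with a convolutional aggregator (the paper also compares a recurrent variant). It shows only the word-level attention stage: each answer token attends over the question, is compared to its attended question vector by elementwise product, and the compared sequence is aggregated by a CNN into a candidate score. The layer sizes, the single attention stage, and the scoring head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class CompareAggregate(nn.Module):
    """Minimal compare-aggregate reader sketch (CNN aggregator).

    Dimensions and layer choices are illustrative assumptions; the paper's
    second, sentence-level attention stage is omitted for brevity.
    """

    def __init__(self, embed_dim=300, hidden_dim=150, num_filters=100):
        super().__init__()
        self.proj = nn.Linear(embed_dim, hidden_dim)
        # Convolutional aggregator over the compared answer sequence.
        self.conv = nn.Conv1d(hidden_dim, num_filters, kernel_size=3, padding=1)
        self.score = nn.Linear(num_filters, 1)

    def forward(self, question, answer):
        # question: (batch, q_len, embed_dim); answer: (batch, a_len, embed_dim)
        q = torch.relu(self.proj(question))                   # (batch, q_len, hidden)
        a = torch.relu(self.proj(answer))                     # (batch, a_len, hidden)
        # Word-level attention: each answer token attends over question tokens.
        attn = torch.softmax(a @ q.transpose(1, 2), dim=-1)   # (batch, a_len, q_len)
        h = attn @ q                                          # attended question per answer token
        # Compare: elementwise product of answer token and attended question vector.
        compared = a * h                                      # (batch, a_len, hidden)
        # Aggregate with CNN + max pooling, then score the candidate.
        feats = torch.relu(self.conv(compared.transpose(1, 2)))  # (batch, filters, a_len)
        pooled = feats.max(dim=-1).values                     # (batch, filters)
        return self.score(pooled).squeeze(-1)                 # (batch,) one score per candidate


if __name__ == "__main__":
    model = CompareAggregate()
    q = torch.randn(2, 12, 300)   # batch of 2 questions, 12 tokens each
    a = torch.randn(2, 30, 300)   # 2 candidate answers, 30 tokens each
    print(model(q, a).shape)      # torch.Size([2])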

Similar Work