Syntactic Data Augmentation Increases Robustness To Inference Heuristics

Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020 – 50 citations

[Paper]    
Security · Model Architecture · Fine-Tuning · BERT · Training Techniques · Evaluation

Pretrained neural models such as BERT, when fine-tuned to perform natural language inference (NLI), often show high accuracy on standard datasets, but display a surprising lack of sensitivity to word order on controlled challenge sets. We hypothesize that this issue is not primarily caused by the pretrained model’s limitations, but rather by the paucity of crowdsourced NLI examples that might convey the importance of syntactic structure at the fine-tuning stage. We explore several methods to augment standard training sets with syntactically informative examples, generated by applying syntactic transformations to sentences from the MNLI corpus. The best-performing augmentation method, subject/object inversion, improved BERT’s accuracy on controlled examples that diagnose sensitivity to word order from 0.28 to 0.73, without affecting performance on the MNLI test set. This improvement generalized beyond the particular construction used for data augmentation, suggesting that augmentation causes BERT to recruit abstract syntactic representations.
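The core augmentation step is straightforward to illustrate. Below is a minimal Python sketch of subject/object inversion, assuming sentences are available as pre-extracted (subject, verb, object) triples; the paper itself derives transformed premise/hypothesis pairs from parsed MNLI sentences, and the function names and label string here are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of subject/object inversion augmentation, assuming
# sentences arrive as (subject, verb, object) triples. The paper transforms
# parsed MNLI sentences; names and labels below are illustrative only.

def invert_subject_object(subject: str, verb: str, obj: str) -> str:
    """Swap the subject and object noun phrases around the verb."""
    return f"{obj} {verb} {subject}"

def augmented_nli_example(subject: str, verb: str, obj: str) -> dict:
    """Pair a sentence with its inversion as an NLI training example.

    Inversion generally changes meaning, so the pair is labeled
    non-entailment (in practice this would be mapped onto an MNLI label).
    """
    premise = f"{subject} {verb} {obj}"
    hypothesis = invert_subject_object(subject, verb, obj)
    return {"premise": premise, "hypothesis": hypothesis, "label": "non-entailment"}

if __name__ == "__main__":
    print(augmented_nli_example("the lawyer", "saw", "the actor"))
    # {'premise': 'the lawyer saw the actor',
    #  'hypothesis': 'the actor saw the lawyer',
    #  'label': 'non-entailment'}
```

The design intuition behind such pairs: premise and hypothesis share identical words but differ in label, so the model cannot rely on lexical overlap alone and must attend to word order, which is exactly the sensitivity the controlled challenge sets diagnose.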
