Probing Image-language Transformers For Verb Understanding

Lisa Anne Hendricks, Aida Nematzadeh. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 – 56 citations


Multimodal image-language transformers have achieved impressive results on a variety of tasks that rely on fine-tuning (e.g., visual question answering and image retrieval). We are interested in shedding light on the quality of their pretrained representations – in particular, whether these models can distinguish between different types of verbs or whether they rely solely on the nouns in a given sentence. To do so, we collect a dataset of image-sentence pairs (in English) covering 421 verbs that are either visual or commonly found in the pretraining data (i.e., the Conceptual Captions dataset). We use this dataset to evaluate pretrained image-language transformers and find that they fail more often in situations that require verb understanding than in those that hinge on other parts of speech. We also investigate which categories of verbs are particularly challenging.
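
As a rough illustration of this kind of probe (not the paper's actual models, dataset, or evaluation code), the sketch below scores one image against a correct caption and a verb foil, where only the verb differs, and checks whether the model prefers the correct verb. It uses Hugging Face's CLIP purely for convenience; the image path and the caption pair are placeholders.

```python
# Illustrative verb-understanding probe: does the model score the caption
# with the correct verb higher than an otherwise-identical verb foil?
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dog.jpg")  # placeholder path; substitute any image
captions = [
    "A dog sits on the grass.",  # assumed-correct caption for the image
    "A dog runs on the grass.",  # verb foil: only the verb is changed
]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, 2): image-text scores

# The probe counts as a success if the correct caption outscores the foil.
prefers_correct = logits[0, 0] > logits[0, 1]
print(f"scores={logits[0].tolist()}, prefers correct verb: {bool(prefers_correct)}")
```

Aggregating this pairwise comparison over many images and verb foils gives a verb-sensitivity score; the same template (swap the noun instead of the verb) yields the comparison against other parts of speech that the paper reports.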
