Learning Visually Grounded Sentence Representations

Douwe Kiela, Alexis Conneau, Allan Jabri, Maximilian Nickel. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018 – 76 citations

[Paper]
ACL NAACL

We introduce a variety of models that ground sentence representations by training on a supervised image-captioning corpus to predict the image features for a given caption. We train a grounded sentence encoder that achieves good performance on COCO caption and image retrieval, and we show that this encoder transfers successfully to various NLP tasks, improving over text-only models. Lastly, we analyze the contribution of grounding and show that the word embeddings learned by this system outperform non-grounded ones.
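The core training signal described above can be sketched in miniature: encode a caption into a fixed vector and regress it onto the paired image's feature vector. The sketch below is a toy illustration under assumed simplifications (a mean-of-word-embeddings encoder, a single caption–image pair, a squared-error objective, and random stand-in "image features"); the paper's actual models use trained sentence encoders and a full captioning corpus.

```python
import numpy as np

# Toy setup (all sizes and data are illustrative, not from the paper).
rng = np.random.default_rng(0)
vocab = {"a": 0, "dog": 1, "on": 2, "grass": 3}
emb_dim, img_dim = 8, 6
E = rng.normal(scale=0.1, size=(len(vocab), emb_dim))  # word embeddings
W = rng.normal(scale=0.1, size=(emb_dim, img_dim))     # projection into image-feature space

def encode(caption):
    """Grounded sentence encoder (toy version): average word embeddings, then project."""
    ids = [vocab[w] for w in caption.split()]
    return E[ids].mean(axis=0) @ W

def loss(caption, target):
    """Squared error between predicted and true image features."""
    d = encode(caption) - target
    return float(d @ d)

# One caption paired with a stand-in image feature vector.
caption = "a dog on grass"
img_feat = rng.normal(size=img_dim)

# Fit the projection so the caption's encoding predicts the image features.
before = loss(caption, img_feat)
for _ in range(100):
    s = E[[vocab[w] for w in caption.split()]].mean(axis=0)  # sentence embedding
    err = s @ W - img_feat        # gradient of 0.5 * ||pred - target||^2 w.r.t. pred
    W -= 0.5 * np.outer(s, err)   # gradient step on the projection
after = loss(caption, img_feat)
```

After training, `encode` maps the caption close to its image's features; in the paper this grounded encoder is then reused as a general-purpose sentence representation.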

Similar Work