
Large-scale Representation Learning From Visually Grounded Untranscribed Speech

Gabriel Ilharco, Yuan Zhang, Jason Baldridge. Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) 2019 – 64 citations


Systems that can associate images with their spoken audio captions are an important step towards visually grounded language learning. We describe a scalable method to automatically generate diverse audio for image captioning datasets. This supports pretraining deep networks for encoding both audio and images, which we do via a dual encoder that learns to align latent representations from both modalities. We show that a masked margin softmax loss for such models is superior to the standard triplet loss. We fine-tune these models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art results—improving recall in the top 10 from 29.6% to 49.5%. We also obtain human ratings on retrieval outputs to better assess the impact of incidentally matching image-caption pairs that were not associated in the data, finding that automatic evaluation substantially underestimates the quality of the retrieved results.
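The abstract contrasts a masked margin softmax loss with the standard triplet loss for the dual encoder. As a rough illustration of the idea, here is a minimal sketch of a dual-encoder loss with an in-batch softmax over negatives and a margin subtracted from the positive logits; the margin value, the cosine normalization, and the omission of the paper's negative-masking scheme are all assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def margin_softmax_loss(audio_emb, image_emb, margin=0.001):
    """Illustrative dual-encoder loss in the spirit of a masked margin
    softmax: paired items sit on the diagonal of the batch similarity
    matrix, a margin is subtracted from the positive logits, and a
    softmax cross-entropy is taken in both retrieval directions.
    (Margin value and lack of negative masking are assumptions.)"""
    # Cosine-normalize so the dot product is a cosine similarity.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sim = a @ v.T                       # (batch, batch) similarities
    n = sim.shape[0]
    logits = sim - margin * np.eye(n)   # penalize positives by the margin

    def xent(mat):
        # Softmax cross-entropy with the diagonal as the target class.
        mat = mat - mat.max(axis=1, keepdims=True)
        logp = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Symmetric: audio-to-image plus image-to-audio retrieval.
    return xent(logits) + xent(logits.T)
```

With matched pairs on the diagonal, the loss is lower than for mismatched pairs, which is the gradient signal that aligns the two latent spaces.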

Similar Work