Multimodal Pretraining Unmasked: A Meta-analysis And A Unified Framework Of Vision-and-language Berts

Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, Desmond Elliott. Transactions of the Association for Computational Linguistics, 2021 – 90 citations

Tags: ACL · Fine Tuning · Survey Paper · TACL · Tools · Training Techniques

Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing. Recently, a multitude of methods have been proposed for pretraining vision and language BERTs to tackle challenges at the intersection of these two key areas of AI. These models can be categorised into either single-stream or dual-stream encoders. We study the differences between these two categories, and show how they can be unified under a single theoretical framework. We then conduct controlled experiments to discern the empirical differences between five V&L BERTs. Our experiments show that training data and hyperparameters are responsible for most of the differences between the reported results, but they also reveal that the embedding layer plays a crucial role in these massive models.
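The single-stream/dual-stream distinction is architectural: single-stream models (e.g. VisualBERT, UNITER) concatenate text and image embeddings and run one Transformer over the joint sequence, while dual-stream models (e.g. ViLBERT, LXMERT) keep a separate encoder per modality and exchange information through cross-attention. The following PyTorch sketch illustrates the two encoder shapes; it is not code from the paper, and the class names, layer counts, and dimensions are all illustrative choices.

```python
import torch
import torch.nn as nn

D, HEADS = 768, 12  # BERT-base-like width; illustrative only


class SingleStream(nn.Module):
    """Single-stream: one Transformer over the concatenated sequence."""

    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(D, HEADS, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, text_emb, img_emb):
        # Joint sequence of shape (batch, text_len + num_regions, D):
        # self-attention mixes the modalities from the first layer on.
        joint = torch.cat([text_emb, img_emb], dim=1)
        return self.encoder(joint)


class DualStream(nn.Module):
    """Dual-stream: per-modality self-attention plus cross-attention."""

    def __init__(self):
        super().__init__()
        self.text_self = nn.TransformerEncoderLayer(D, HEADS, batch_first=True)
        self.img_self = nn.TransformerEncoderLayer(D, HEADS, batch_first=True)
        self.txt2img = nn.MultiheadAttention(D, HEADS, batch_first=True)
        self.img2txt = nn.MultiheadAttention(D, HEADS, batch_first=True)

    def forward(self, text_emb, img_emb):
        t = self.text_self(text_emb)  # text-only self-attention
        v = self.img_self(img_emb)    # vision-only self-attention
        t_out, _ = self.txt2img(t, v, v)  # text queries attend to vision
        v_out, _ = self.img2txt(v, t, t)  # vision queries attend to text
        return t_out, v_out


text = torch.randn(1, 16, D)  # toy token embeddings
img = torch.randn(1, 36, D)   # toy region features, already projected to D
print(SingleStream()(text, img).shape)      # (1, 52, 768)
t, v = DualStream()(text, img)
print(t.shape, v.shape)                     # (1, 16, 768) (1, 36, 768)
```

The trade-off this sketch makes visible is the one the paper's unified framework formalises: a single-stream encoder shares all parameters across modalities, whereas a dual-stream encoder keeps modality-specific parameters and couples the streams only through cross-attention.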
