
Simple Is Not Easy: A Simple Strong Baseline For Textvqa And Textcaps

Qi Zhu, Chenyu Gao, Peng Wang, Qi Wu. Proceedings of the AAAI Conference on Artificial Intelligence 2021 – 49 citations

[Code] [Paper]
AAAI · Applications · Has Code · Image Text Integration · Interdisciplinary Approaches · Model Architecture · Tools · Visual Contextualization · Visual Question Answering

Text appearing in everyday scenes that can be recognized by OCR (Optical Character Recognition) tools carries significant information, such as street names, product brands and prices. Two tasks that extend existing vision-language applications with such text – text-based visual question answering and text-based image captioning – are catching on rapidly. To address these problems, many sophisticated multi-modality encoding frameworks (such as heterogeneous graph structures) have been used. In this paper, we argue that a simple attention mechanism can do the same or an even better job without any bells and whistles. Under this mechanism, we simply split OCR token features into separate visual- and linguistic-attention branches and send them to a popular Transformer decoder to generate answers or captions. Surprisingly, we find this simple baseline model is rather strong: it consistently outperforms state-of-the-art (SOTA) models on two popular benchmarks, TextVQA and all three tasks of ST-VQA, even though those SOTA models use far more complex encoding mechanisms. Transferring it to text-based image captioning, we also surpass the TextCaps Challenge 2020 winner. We hope this work sets a new baseline for these two OCR-text-related applications and inspires new thinking about multi-modality encoder design. Code is available at https://github.com/ZephyrZhuQi/ssbaseline
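The core idea in the abstract – attend separately over the visual and the linguistic features of OCR tokens, conditioned on the question, then feed both contexts into a standard Transformer decoder – can be sketched briefly in PyTorch. The snippet below is a minimal illustration under assumed feature dimensions and module choices (Faster R-CNN-sized visual features, FastText-sized linguistic features, `nn.MultiheadAttention` for the branch attention); it is not the authors' exact implementation, which lives in the linked repository.

```python
import torch
import torch.nn as nn

class SplitOCRAttentionBaseline(nn.Module):
    """Sketch: OCR token features are split into a visual branch and a
    linguistic branch, each attended over separately (conditioned on the
    question), then passed as memory to a vanilla Transformer decoder.
    All dimensions below are illustrative assumptions."""

    def __init__(self, d_vis=2048, d_lin=300, d_model=768,
                 n_heads=8, n_layers=4, vocab_size=30522):
        super().__init__()
        # Project each OCR feature branch into the shared model space.
        self.vis_proj = nn.Linear(d_vis, d_model)   # e.g. region appearance features
        self.lin_proj = nn.Linear(d_lin, d_model)   # e.g. FastText word embeddings
        # Question-conditioned attention over each branch.
        self.vis_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.lin_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # A standard Transformer decoder generates answer / caption tokens.
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.out_head = nn.Linear(d_model, vocab_size)

    def forward(self, q_feats, ocr_vis, ocr_lin, prev_tokens):
        # q_feats:     (B, Lq, d_model)  encoded question tokens
        # ocr_vis:     (B, N, d_vis)     visual features of OCR tokens
        # ocr_lin:     (B, N, d_lin)     linguistic features of OCR tokens
        # prev_tokens: (B, Lt)           previously generated token ids
        vis = self.vis_proj(ocr_vis)
        lin = self.lin_proj(ocr_lin)
        # The question attends to each OCR branch independently.
        vis_ctx, _ = self.vis_attn(q_feats, vis, vis)
        lin_ctx, _ = self.lin_attn(q_feats, lin, lin)
        # Concatenate the two attended contexts as decoder memory.
        memory = torch.cat([vis_ctx, lin_ctx], dim=1)
        dec = self.decoder(self.tok_embed(prev_tokens), memory)
        return self.out_head(dec)  # (B, Lt, vocab_size) logits

# Hypothetical shapes: 2 images, 20-token questions, 50 OCR tokens, 10 decoded steps.
model = SplitOCRAttentionBaseline()
logits = model(torch.randn(2, 20, 768), torch.randn(2, 50, 2048),
               torch.randn(2, 50, 300), torch.randint(0, 30522, (2, 10)))
```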

Similar Work