
Improving Image Captioning With Better Use Of Captions

Zhan Shi, Xu Zhou, Xipeng Qiu, Xiaodan Zhu. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020) – 73 citations

[Paper]

Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision communities. In this paper, we present a novel image captioning architecture that better exploits the semantics available in captions and leverages them to enhance both image representation and caption generation. Our model first constructs caption-guided visual relationship graphs that introduce beneficial inductive bias through weakly supervised multi-instance learning. The representation is then enhanced with the textual and visual features of neighbouring and contextual nodes. During generation, the model further incorporates visual relationships via multi-task learning, jointly predicting the word sequence and the object/predicate tag sequence. We perform extensive experiments on the MSCOCO dataset, showing that the proposed framework significantly outperforms the baselines and achieves state-of-the-art performance across a wide range of evaluation metrics.
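To make the two mechanisms in the abstract concrete, here is a minimal PyTorch sketch: a generic mean-aggregation step that enhances region features with their neighbours in a relationship graph, and a joint loss over word and object/predicate tag sequences. This is an illustrative reconstruction, not the authors' implementation; the names `RelationAwareEnhancer`, `multitask_loss`, and the `tag_weight` hyper-parameter are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationAwareEnhancer(nn.Module):
    """Enhance region features with neighbouring-node features from a
    caption-guided visual relationship graph (a simple mean-aggregation
    sketch, not the paper's exact graph operator)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (N, D) region features; adj: (N, N) 0/1 adjacency
        # derived from the relationship graph.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbour_mean = adj @ node_feats / deg            # (N, D) neighbour average
        fused = torch.cat([node_feats, neighbour_mean], dim=-1)
        return F.relu(self.proj(fused))                    # enhanced node features


def multitask_loss(word_logits, word_targets, tag_logits, tag_targets,
                   tag_weight: float = 0.5):
    """Joint objective over the caption word sequence and the
    object/predicate tag sequence; tag_weight is a hypothetical
    balancing hyper-parameter."""
    # logits: (B, T, V); targets: (B, T) of class indices.
    word_loss = F.cross_entropy(word_logits.flatten(0, 1), word_targets.flatten())
    tag_loss = F.cross_entropy(tag_logits.flatten(0, 1), tag_targets.flatten())
    return word_loss + tag_weight * tag_loss
```

In this sketch the enhanced node features would feed the caption decoder, whose word logits and tag logits are then trained jointly through `multitask_loss`, mirroring the multi-task generation step described above.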

Similar Work