
Knowledge-enriched Attention Network With Group-wise Semantic For Visual Storytelling

Tengpeng Li, Hanli Wang, Bin He, Chang Wen Chen. Proceedings of the AAAI Conference on Artificial Intelligence, 2022 – 89 citations

[Paper]
AAAI · Datasets · Evaluation · Interdisciplinary Approaches · Model Architecture

As a technically challenging task, visual storytelling aims to generate an imaginative and coherent multi-sentence narrative from a group of related images. Existing methods often produce direct, rigid descriptions of the apparent image content because they cannot exploit implicit information beyond the images themselves. As a result, these schemes fail to capture consistent dependencies in a holistic representation, which impairs the generation of reasonable and fluent stories. To address these problems, a novel knowledge-enriched attention network with a group-wise semantic model is proposed, and its three novel components are validated through substantial experiments. First, a knowledge-enriched attention network extracts implicit concepts from an external knowledge base, and a cascade cross-modal attention mechanism then fuses these concepts into imaginative yet concrete representations. Second, a group-wise semantic module with second-order pooling is developed to provide globally consistent guidance. Third, a unified one-stage story generation model with an encoder-decoder structure trains and infers the knowledge-enriched attention network, the group-wise semantic module, and the multi-modal story decoder jointly in an end-to-end fashion. Extensive experiments on the popular Visual Storytelling (VIST) dataset, using both objective and subjective evaluation metrics, demonstrate the superior performance of the proposed scheme compared with other state-of-the-art methods.
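
The cascade cross-modal attention described above can be pictured as two stacked attention stages: image-region features first attend over concept embeddings retrieved from an external knowledge base, and the concept-enriched features then attend over group-level features from the whole photo stream. The PyTorch sketch below is a minimal illustration of that idea under assumed shapes; the class and argument names (`CascadeCrossModalAttention`, `img_to_concept`, `img_to_group`) are hypothetical and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CascadeCrossModalAttention(nn.Module):
    """Illustrative sketch: image regions attend over external knowledge
    concepts, then the enriched features attend over the image group.
    All names and shapes are assumptions, not the paper's code."""

    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        self.img_to_concept = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.img_to_group = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, img_feats, concept_embs, group_feats):
        # img_feats:    (B, R, D) region features of one image
        # concept_embs: (B, K, D) embeddings of concepts retrieved from
        #               an external knowledge base (e.g. ConceptNet)
        # group_feats:  (B, G, D) features pooled over all images in the album
        x, _ = self.img_to_concept(img_feats, concept_embs, concept_embs)
        x = self.norm1(img_feats + x)      # stage 1: enrich regions with concepts
        y, _ = self.img_to_group(x, group_feats, group_feats)
        return self.norm2(x + y)           # stage 2: align with the whole group
```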

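Second-order pooling, the operation behind the group-wise semantic module, summarizes a group of features through their pairwise interactions rather than a simple average. The sketch below, again a plain illustration rather than the paper's code, computes a bilinear (second-order) descriptor over per-image features of an album; the signed square root and L2 normalization steps are conventional post-processing for bilinear features and are assumptions here.

```python
import torch
import torch.nn.functional as F

def second_order_pool(feats: torch.Tensor) -> torch.Tensor:
    """Bilinear (second-order) pooling over a group of image features.

    feats: (B, N, D) — N per-image feature vectors for one photo stream.
    Returns a (B, D*D) vector capturing pairwise feature interactions,
    which could serve as the globally consistent guidance signal.
    """
    B, N, D = feats.shape
    # Average of outer products f_i f_i^T over the N images in the group
    cov = torch.einsum('bnd,bne->bde', feats, feats) / N   # (B, D, D)
    v = cov.flatten(1)                                     # (B, D*D)
    v = torch.sign(v) * torch.sqrt(torch.abs(v) + 1e-8)    # signed square root
    return F.normalize(v, dim=1)                           # L2 normalization
```

In practice such a descriptor would be projected back to the decoder's hidden size by a linear layer before conditioning story generation, a common design choice for bilinear features.
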
Similar Work