
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks

Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz. IEEE/CVF International Conference on Computer Vision (ICCV), 2021 – 53 citations


Recently, there has been an increasing number of efforts to introduce models capable of generating natural language explanations (NLEs) for their predictions on vision-language (VL) tasks. Such models are appealing because they can provide human-friendly and comprehensive explanations. However, existing methods have not been compared against one another, owing to a lack of reusable evaluation frameworks and a scarcity of datasets. In this work, we introduce e-ViL and e-SNLI-VE. e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks. It spans four models and three datasets, and uses both automatic metrics and human evaluation to assess model-generated explanations. e-SNLI-VE is currently the largest existing VL dataset with NLEs (over 430k instances). We also propose a new model that combines UNITER, which learns joint embeddings of images and text, and GPT-2, a pre-trained language model that is well-suited for text generation. It surpasses the previous state of the art by a large margin across all datasets. Code and data are available here: https://github.com/maximek3/e-ViL.
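The core idea of the proposed model, conditioning GPT-2's text generation on joint image-text embeddings from UNITER, can be sketched roughly as below. This is a minimal illustration under assumptions, not the paper's implementation: `encode_image_text` is a hypothetical stand-in for UNITER (which has no off-the-shelf `transformers` class), the placeholder features and prefix-conditioning details are our own simplifications, and `generate(inputs_embeds=...)` requires a reasonably recent `transformers` version.

```python
# Sketch: feed joint vision-language embeddings to GPT-2 as a soft prefix,
# then let GPT-2 generate the natural language explanation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")


def encode_image_text(image, text, dim=768):
    """Hypothetical stand-in for UNITER: returns joint image-text
    embeddings of shape (1, sequence_length, dim)."""
    return torch.randn(1, 40, dim)  # placeholder features, not real UNITER output


def generate_explanation(image, question, answer, max_new_tokens=30):
    # Joint VL embeddings act as a prefix in GPT-2's input embedding space.
    vl_embeds = encode_image_text(image, question)
    # Prompt GPT-2 with the question-answer pair; it continues with the explanation.
    prompt = f"{question} {answer} because"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    prompt_embeds = gpt2.transformer.wte(prompt_ids)  # token embeddings
    inputs_embeds = torch.cat([vl_embeds, prompt_embeds], dim=1)
    out = gpt2.generate(
        inputs_embeds=inputs_embeds,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

In this sketch the two components stay loosely coupled: the VL encoder supplies a soft prefix while GPT-2 handles fluent generation, which matches the division of labor the abstract describes, though the actual model fuses the components more tightly during training.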

Similar Work