
Answering Questions About Data Visualizations Using Efficient Bimodal Fusion

Kushal Kafle, Robik Shrestha, Brian Price, Scott Cohen, Christopher Kanan. IEEE Winter Conference on Applications of Computer Vision (WACV), 2020 – 45 citations

[Paper]
3d Representation Datasets Question Answering Visual Question Answering

Chart question answering (CQA) is a newly proposed visual question answering (VQA) task where an algorithm must answer questions about data visualizations, e.g. bar charts, pie charts, and line graphs. CQA requires capabilities that natural-image VQA algorithms lack: fine-grained measurements, optical character recognition, and handling out-of-vocabulary words in both questions and answers. Without modifications, state-of-the-art VQA algorithms perform poorly on this task. Here, we propose a novel CQA algorithm called parallel recurrent fusion of image and language (PReFIL). PReFIL first learns bimodal embeddings by fusing question and image features and then intelligently aggregates these learned embeddings to answer the given question. Despite its simplicity, PReFIL greatly surpasses state-of-the-art systems and human baselines on both the FigureQA and DVQA datasets. Additionally, we demonstrate that PReFIL can be used to reconstruct tables by asking a series of questions about a chart.
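The abstract describes the overall flow: encode the question and the chart image, fuse the two modalities, recurrently aggregate the fused embeddings, and classify an answer. The sketch below illustrates that flow in PyTorch; the specific layer types (LSTM question encoder, 1x1-convolution fusion, bidirectional GRU aggregator), dimensions, and hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a PReFIL-style model, based only on the abstract:
# fuse question and image features into bimodal embeddings, aggregate
# them recurrently, then predict an answer. All layer choices and sizes
# here are illustrative assumptions.
import torch
import torch.nn as nn


class PReFILSketch(nn.Module):
    def __init__(self, vocab_size=2000, embed_dim=64, q_dim=256,
                 img_channels=256, fused_dim=256, num_answers=100):
        super().__init__()
        # Question encoder: word embeddings followed by an LSTM (assumed).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.q_rnn = nn.LSTM(embed_dim, q_dim, batch_first=True)
        # Fusion: tile the question vector over the image feature grid,
        # concatenate along channels, and mix with 1x1 convolutions.
        self.fuse = nn.Sequential(
            nn.Conv2d(img_channels + q_dim, fused_dim, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused_dim, fused_dim, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Recurrent aggregation over the fused spatial locations (assumed GRU).
        self.agg = nn.GRU(fused_dim, fused_dim, batch_first=True,
                          bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * fused_dim, fused_dim),
            nn.ReLU(inplace=True),
            nn.Linear(fused_dim, num_answers),
        )

    def forward(self, question_tokens, image_features):
        # question_tokens: (B, T) token ids
        # image_features: (B, C, H, W), e.g. from a pretrained CNN backbone
        _, (h, _) = self.q_rnn(self.embed(question_tokens))
        q = h[-1]                                        # (B, q_dim)
        B, _, H, W = image_features.shape
        q_tiled = q[:, :, None, None].expand(-1, -1, H, W)
        fused = self.fuse(torch.cat([image_features, q_tiled], dim=1))
        # Flatten the spatial grid into a sequence and aggregate recurrently.
        seq = fused.flatten(2).transpose(1, 2)           # (B, H*W, fused_dim)
        _, h_agg = self.agg(seq)
        pooled = torch.cat([h_agg[0], h_agg[1]], dim=1)  # (B, 2*fused_dim)
        return self.classifier(pooled)                   # answer logits


# Usage with dummy inputs:
# logits = PReFILSketch()(torch.randint(0, 2000, (2, 12)),
#                         torch.randn(2, 256, 14, 14))
```

The table-reconstruction result mentioned at the end of the abstract would sit on top of such a model: the chart is queried with a series of questions and the predicted answers are assembled into table cells.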

Similar Work