
Modality-agnostic Attention Fusion For Visual Search With Text Feedback

Eric Dodds, Jack Culpepper, Simao Herdade, Yang Zhang, Kofi Boakye. arXiv 2020 – 46 citations


Image retrieval with natural language feedback offers the promise of catalog search based on fine-grained visual features that go beyond objects and binary attributes, facilitating real-world applications such as e-commerce. Our Modality-Agnostic Attention Fusion (MAAF) model combines image and text features and outperforms existing approaches on two datasets for visual search with modifying phrases, Fashion IQ and CSS, and performs competitively on Fashion200k, a dataset with only single-word modifications. We also introduce two new challenging benchmarks adapted from Birds-to-Words and Spot-the-Diff, which provide new settings with rich language inputs, and we show that our approach, without modification, outperforms strong baselines. To better understand our model, we conduct detailed ablations on Fashion IQ and provide visualizations of the surprising phenomenon of words avoiding “attending” to the image regions they refer to.
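The fusion idea in the abstract can be pictured concretely: image region features and word embeddings are concatenated into a single token sequence and processed by a transformer whose attention makes no distinction between modalities. Below is a minimal sketch of that idea; the dimensions, layer count, mean pooling, and the `MAAFSketch` name are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MAAFSketch(nn.Module):
    """Illustrative modality-agnostic attention fusion (not the paper's exact model)."""

    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, image_tokens, word_tokens):
        # image_tokens: (B, R, dim) spatial image features from a CNN backbone
        # word_tokens:  (B, W, dim) embedded words of the modifying text
        # Concatenate into one sequence; self-attention treats both
        # modalities identically ("modality-agnostic").
        tokens = torch.cat([image_tokens, word_tokens], dim=1)
        fused = self.encoder(tokens)
        # Pool to a single embedding for retrieval (mean pooling is an
        # assumption made for this sketch).
        return fused.mean(dim=1)

# Hypothetical usage: embed a (reference image, modifying text) query and
# score it against candidate catalog embeddings by dot product.
model = MAAFSketch()
image_tokens = torch.randn(4, 49, 512)  # e.g. a flattened 7x7 feature map
word_tokens = torch.randn(4, 12, 512)   # embedded modifying phrase
query = model(image_tokens, word_tokens)  # (4, 512)
candidates = torch.randn(100, 512)        # catalog image embeddings
scores = query @ candidates.T             # (4, 100) retrieval scores
```

In this framing, retrieval reduces to nearest-neighbor search between the fused query embedding and precomputed catalog embeddings, which is what makes the approach practical for e-commerce-scale search.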
