Topological Planning With Transformers For Vision-and-language Navigation

Kevin Chen, Junshen K. Chen, Jo Chuang, Marynel Vázquez, Silvio Savarese. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) – 63 citations

[Paper]

Tags: CVPR, Model Architecture

Conventional approaches to vision-and-language navigation (VLN) are trained end-to-end but struggle to perform well in freely traversable environments. Inspired by the robotics community, we propose a modular approach to VLN using topological maps. Given a natural language instruction and topological map, our approach leverages attention mechanisms to predict a navigation plan in the map. The plan is then executed with low-level actions (e.g. forward, rotate) using a robust controller. Experiments show that our method outperforms previous end-to-end approaches, generates interpretable navigation plans, and exhibits intelligent behaviors such as backtracking.
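The modular pipeline described above — attend over the topological map conditioned on the instruction, produce a node-level plan, then expand the plan into low-level commands — can be sketched roughly as follows. This is a hypothetical toy illustration, not the paper's code: the graph, the embeddings, and helper names like `plan_path` are invented, and a simple greedy dot-product attention stands in for the paper's transformer planner.

```python
import math

def attention_scores(query, keys):
    """Softmax over dot products between the instruction embedding and node embeddings."""
    logits = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def plan_path(graph, node_feats, instr_feat, start, max_steps=3):
    """Greedy planner sketch: at each node, attend over neighbors in the
    topological map and move to the highest-scoring one."""
    path = [start]
    current = start
    for _ in range(max_steps):
        neighbors = graph[current]
        if not neighbors:
            break
        probs = attention_scores(instr_feat, [node_feats[n] for n in neighbors])
        current = neighbors[max(range(len(probs)), key=probs.__getitem__)]
        path.append(current)
    return path

def to_low_level(path, headings):
    """Controller sketch: expand each planned edge into (rotate, forward) commands."""
    actions = []
    for a, b in zip(path, path[1:]):
        actions.append(("rotate", headings[(a, b)]))
        actions.append(("forward", 1.0))
    return actions

# Toy topological map and 2-d "embeddings" (purely illustrative).
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
node_feats = {"B": [1.0, 0.0], "C": [0.0, 1.0], "D": [1.0, 1.0]}
instr_feat = [1.0, 0.0]          # instruction embedding favoring node B
headings = {("A", "B"): 0.0, ("B", "D"): 90.0}

path = plan_path(graph, node_feats, instr_feat, start="A")
actions = to_low_level(path, headings)
```

Because the plan is a sequence of map nodes rather than raw actions, it stays interpretable: one can inspect `path` directly, and re-planning from an intermediate node gives backtracking-like behavior for free.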
