
Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018 – 967 citations

Tags: CVPR, Datasets, Evaluation, Reinforcement Learning

A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator – a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings – the Room-to-Room (R2R) dataset.
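The sketch below illustrates, in rough outline, the interaction pattern the abstract describes: an agent follows a natural-language instruction by moving between discrete, visually grounded viewpoints in a navigation graph, as in the Matterport3D Simulator and the R2R task. This is not the authors' released code or the simulator's actual API; `NavEnv`, `Observation`, and `follow_instruction` are hypothetical names introduced only for illustration.

```python
# Minimal sketch of instruction-conditioned navigation over a discrete viewpoint graph.
# All class and function names here are hypothetical, not the Matterport3D Simulator API.

from dataclasses import dataclass, field
import random


@dataclass
class Observation:
    viewpoint_id: str                                   # current panorama node
    heading: float                                      # agent heading in radians
    navigable: list = field(default_factory=list)       # viewpoints reachable in one step


class NavEnv:
    """Toy environment over a small navigation graph of real camera positions."""

    def __init__(self, graph, start, goal):
        self.graph, self.pos, self.goal = graph, start, goal

    def reset(self):
        return Observation(self.pos, 0.0, self.graph[self.pos])

    def step(self, next_viewpoint):
        # Movement is only legal along graph edges, mirroring the simulator's
        # restriction to viewpoints where real imagery was captured.
        assert next_viewpoint in self.graph[self.pos]
        self.pos = next_viewpoint
        done = self.pos == self.goal
        return Observation(self.pos, 0.0, self.graph[self.pos]), done


def follow_instruction(env, instruction, policy, max_steps=10):
    """Roll out a policy that maps (instruction, observation) to the next viewpoint."""
    obs = env.reset()
    for _ in range(max_steps):
        action = policy(instruction, obs)
        if action is None:            # the agent decides it has reached the goal
            break
        obs, done = env.step(action)
        if done:
            break
    return obs.viewpoint_id


if __name__ == "__main__":
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    env = NavEnv(graph, start="a", goal="c")
    # Random baseline policy; a real agent would encode the instruction with a
    # sequence model and ground it in the visual observation at every step.
    policy = lambda instr, obs: random.choice(obs.navigable) if obs.navigable else None
    print(follow_instruction(env, "Walk down the hall to the kitchen.", policy))
```

In the paper's framing, the `policy` stand-in is where the visually grounded sequence-to-sequence model would sit: the instruction is encoded as a token sequence, the observation as image features, and the decoder emits one navigation action per step.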

Similar Work