
The Road to Know-Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation

Yuankai Qi, Zizheng Pan, Yicong Hong, Ming-Hsuan Yang, Anton van den Hengel, Qi Wu. IEEE/CVF International Conference on Computer Vision (ICCV) 2021 – 50 citations

Tags: 3D Representation · Agentic · Has Code · ICCV · Model Architecture

Vision-and-Language Navigation (VLN) requires an agent to find a path to a remote location on the basis of natural-language instructions and a set of photo-realistic panoramas. Most existing methods take the words in the instructions and the discrete views of each panorama as the minimal unit of encoding. However, this requires a model to match different nouns (e.g., TV, table) against the same input view feature. In this work, we propose an object-informed sequential BERT that encodes visual perceptions and linguistic instructions at the same fine-grained level, namely objects and words. Our sequential BERT also enables the visual-textual clues to be interpreted in light of the temporal context, which is crucial to multi-round VLN tasks. Additionally, we enable the model to identify the relative direction (e.g., left/right/front/back) of each navigable location and the room type (e.g., bedroom, kitchen) of its current and final navigation goal, as such information is widely mentioned in instructions to indicate the desired next and final locations. We thus enable the model to know where the objects lie in the images, and to know where they stand in the scene. Extensive experiments demonstrate the effectiveness of our approach compared against several state-of-the-art methods on three indoor VLN tasks: REVERIE, NDH, and R2R. Project repository: https://github.com/YuankaiQi/ORIST
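The abstract describes the core idea at a high level: instruction words and panorama objects are encoded as one token sequence by a single transformer, with auxiliary predictions for the relative direction of navigable views and the room type of the current and goal locations. The sketch below illustrates that arrangement in PyTorch. It is a minimal illustration under assumed layer sizes, feature dimensions, and head names (ObjectWordSequenceEncoder, obj_proj, direction_head, room_head, and action_head are all hypothetical), not the authors' ORIST implementation; see the project repository for the actual model.

```python
import torch
import torch.nn as nn

class ObjectWordSequenceEncoder(nn.Module):
    """Sketch: encode instruction word tokens and panorama object features
    in one transformer sequence, with auxiliary heads for the relative
    direction of object/view tokens and the room type of the current or
    goal location. All sizes and head names are illustrative assumptions."""

    def __init__(self, vocab_size=30522, hidden=768, n_layers=4, n_heads=12,
                 obj_feat_dim=2048, n_directions=4, n_room_types=30):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        # Project pre-extracted object region features into the shared
        # hidden space so objects and words pass through one encoder.
        self.obj_proj = nn.Linear(obj_feat_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Auxiliary heads: relative direction (left/right/front/back) per
        # object token, and room type of the current/goal location.
        self.direction_head = nn.Linear(hidden, n_directions)
        self.room_head = nn.Linear(hidden, n_room_types)
        self.action_head = nn.Linear(hidden, 1)  # score per candidate token

    def forward(self, word_ids, obj_feats):
        # word_ids:  (B, Lw)            instruction token ids
        # obj_feats: (B, Lo, obj_dim)   object region features per panorama
        words = self.word_emb(word_ids)
        objs = self.obj_proj(obj_feats)
        seq = torch.cat([words, objs], dim=1)           # one joint sequence
        ctx = self.encoder(seq)                         # cross-modal context
        obj_ctx = ctx[:, word_ids.size(1):]             # object token states
        return (self.action_head(obj_ctx).squeeze(-1),  # navigation scores
                self.direction_head(obj_ctx),           # direction logits
                self.room_head(ctx[:, 0]))              # room type from the
                                                        # first word token,
                                                        # a stand-in for a
                                                        # [CLS]-style summary
```

In such a setup, the direction and room-type heads would be trained as auxiliary losses alongside the navigation objective, which matches the paper's motivation that instructions frequently reference directions and room types of the next and final locations.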

Similar Work