OpenCodePapers
Visual Navigation on Room-to-Room
Results over time (chart of SPL by model release date)
Leaderboard
| Paper | Code | SPL ↑ | Model | Release date |
|---|---|---|---|---|
| Agent Journey Beyond RGB: Unveiling Hybrid Semantic-Spatial Environmental Representations for Vision-and-Language Navigation | ✓ | 0.6383 | SUSA | 2024-12-09 |
| Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding | — | 0.61 | Meta-Explore | 2023-03-07 |
| BEVBert: Multimodal Map Pre-training for Language-guided Navigation | ✓ | 0.60 | BEV-BERT | 2022-12-08 |
| Towards Learning a Generalist Model for Embodied Navigation | ✓ | 0.60 | NaviLLM | 2023-12-04 |
| HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation | ✓ | 0.59 | HOP | 2022-03-22 |
| VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation | ✓ | 0.58 | VLN-PETL | 2023-08-20 |
| Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation | ✓ | 0.58 | DUET | 2022-02-23 |
| A Recurrent Vision-and-Language BERT for Navigation | ✓ | 0.57 | VLN-BERT | 2020-11-26 |
| Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training | ✓ | 0.51 | Prevalent | 2020-02-25 |
| Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation | — | 0.38 | RCM+SIL (no early exploration) | 2018-11-25 |
| Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments | ✓ | 0.18 | Seq2Seq baseline | 2017-11-20 |
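The ranking metric above is SPL (Success weighted by Path Length): an episode counts only if the agent succeeds, and successful episodes are weighted by how close the traveled path length is to the shortest path. A minimal sketch of how SPL is typically computed; the function name and the `(success, shortest, taken)` episode layout are illustrative, not from any specific benchmark codebase:

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: iterable of (success, shortest_path_len, agent_path_len)
    tuples, one per navigation episode. Illustrative sketch, not an
    official benchmark implementation.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            # Weight each success by shortest / max(taken, shortest),
            # so an optimal path scores 1.0 and detours score less.
            total += shortest / max(taken, shortest)
    return total / len(episodes)

# Two episodes: one success along the optimal path, one failure.
print(spl([(True, 10.0, 10.0), (False, 8.0, 12.0)]))  # → 0.5
```

A failed episode contributes 0 regardless of path length, which is why SPL is always at or below the plain success rate.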