
Visual Question Answering on VQA v2 test-dev

Task: Visual Question Answering
Dataset Link
Results over time: interactive chart plotting each model's metrics against its release date (not reproduced here).
Leaderboard
Paper | Code | Accuracy (%) | Model Name | Release Date
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | ✓ Link | 82.30 | BLIP-2 ViT-G OPT 6.7B (fine-tuned) | 2023-01-30
CoCa: Contrastive Captioners are Image-Text Foundation Models | ✓ Link | 82.3 | CoCa | 2022-05-04
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | ✓ Link | 82.0 | OFA | 2022-02-07
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | ✓ Link | 81.74 | BLIP-2 ViT-G OPT 2.7B (fine-tuned) | 2023-01-30
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | ✓ Link | 81.66 | BLIP-2 ViT-G FlanT5 XL (fine-tuned) | 2023-01-30
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video | ✓ Link | 81.11 | mPLUG-2 | 2023-02-01
Florence: A New Foundation Model for Computer Vision | ✓ Link | 80.16 | Florence | 2021-11-22
 | | 77.69 | Aurora (ours, r=64) | 
Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | ✓ Link | 76.8 | VK-OOD | 2023-02-11
LXMERT Model Compression for Visual Question Answering | ✓ Link | 70.72 | LXMERT (low-magnitude pruning) | 2023-10-23
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | ✓ Link | 56.2 | LocVLM-L | 2024-04-11
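For reference, the Accuracy column corresponds to the standard VQA accuracy metric used for VQA v2: a predicted answer scores min(#matching human answers / 3, 1), averaged over the leave-one-out subsets of the ten annotator answers. The snippet below is a minimal sketch of that scoring rule, not code from this page or from any of the listed papers; the official evaluation additionally normalizes answers (punctuation, articles, number words), which is only hinted at here.

```python
from itertools import combinations

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Score one prediction against the (typically 10) human answers for a question.

    Sketch of the standard VQA accuracy: min(#matching answers / 3, 1),
    averaged over all leave-one-out subsets of the human answers.
    """
    # The official evaluation applies fuller normalization (punctuation,
    # articles, digit/word forms); lowercasing/stripping stands in for it here.
    pred = predicted.strip().lower()
    answers = [a.strip().lower() for a in human_answers]

    scores = []
    # Average over every subset that leaves one annotator out.
    for subset in combinations(range(len(answers)), len(answers) - 1):
        matches = sum(1 for i in subset if answers[i] == pred)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 7 of 10 annotators answered "yes" -> score is capped at 1.0.
print(vqa_accuracy("yes", ["yes"] * 7 + ["no"] * 3))
```

Leaderboard values are this per-question score averaged over the test-dev split and reported as a percentage.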