OpenCodePapers

Zero-Shot Video Question Answering on MSVD-QA

Tasks: Video Question Answering · Zero-Shot Video Question Answering
Leaderboard
| Paper | Code | Accuracy | Confidence Score | Model Name | Release Date |
|---|---|---|---|---|---|
| Tarsier: Recipes for Training and Evaluating Large Video Description Models | ✓ | 80.3 | 4.2 | Tarsier (34B) | 2024-06-30 |
| Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams | ✓ | 80.3 | 3.9 | Flash-VStream | 2024-06-12 |
| LinVT: Empower Your Image-level Large Language Model to Understand Videos | ✓ | 80.2 | 4.4 | LinVT-Qwen2-VL (7B) | 2024-12-06 |
| VILA: On Pre-training for Visual Language Models | ✓ | 80.1 | | VILA1.5-40B | 2023-12-12 |
| PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | ✓ | 79.9 | 4.2 | PLLaVA (34B) | 2024-04-25 |
| SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | ✓ | 79.9 | 4.1 | SlowFast-LLaVA-34B | 2024-07-22 |
| An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM | ✓ | 79.6 | 4.1 | IG-VLM-34B | 2024-03-27 |
| TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | ✓ | 79.4 | 4.1 | TS-LLaVA-34B | 2024-11-17 |
| PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | ✓ | 77.1 | 4.0 | PPLLaVA-7B | 2024-11-04 |
| Elysium: Exploring Object-level Perception in Videos via MLLM | ✓ | 75.8 | 3.7 | Elysium | 2024-03-25 |
| MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | ✓ | 75.2 | 2.9 | MovieChat | 2023-07-31 |
| ST-LLM: Large Language Models Are Effective Temporal Learners | ✓ | 74.6 | 3.9 | ST-LLM | 2024-03-30 |
| MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | ✓ | 73.9 | 2 | MiniGPT4-video-7B | 2024-04-04 |
| Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | ✓ | 73.2 | 3.9 | Video-LaVIT | 2024-02-05 |
| VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | ✓ | 72.4 | 3.6 | VideoGPT+ | 2024-06-13 |
| LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token | ✓ | 70.9 | 4.0 | LLaVA-Mini | 2025-01-07 |
| Video-LLaVA: Learning United Visual Representation by Alignment Before Projection | ✓ | 70.7 | 3.9 | Video-LLaVA-7B | 2023-11-16 |
| MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | ✓ | 70.0 | 3.9 | VideoChat2 | 2023-11-28 |
| LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | ✓ | 70.0 | 3.7 | LLaMA-VID-13B (2 Token) | 2023-11-28 |
| LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | ✓ | 69.7 | 3.7 | LLaMA-VID-7B (2 Token) | 2023-11-28 |
| Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | ✓ | 69.3 | 3.7 | Chat-UniVi-7B | 2023-11-14 |
| BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | ✓ | 67.0 | 3.6 | BT-Adapter (zero-shot) | 2023-09-27 |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | ✓ | 64.9 | 3.3 | Video-ChatGPT-7B | 2023-06-08 |
| VideoChat: Chat-Centric Video Understanding | ✓ | 56.3 | 2.8 | Video Chat-7B | 2023-05-10 |
| LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | ✓ | 54.9 | 3.1 | LLaMA Adapter-7B | 2023-04-28 |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | ✓ | 51.6 | 2.5 | Video LLaMA-7B | 2023-06-05 |
| Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | ✓ | 33.8 | | FrozenBiLM | 2022-06-16 |