
Zero-Shot Video Question Answering on MSRVTT-QA

Tasks: Video Question Answering · Zero-Shot Video Question Answer
Dataset: MSRVTT-QA
Results over time

Interactive chart: accuracy and confidence score per model, plotted by release date.
Leaderboard
| Paper | Code | Accuracy | Confidence Score | Model Name | Release Date |
|---|---|---|---|---|---|
| Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams | ✓ | 72.4 | 3.4 | Flash-VStream | 2024-06-12 |
| PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | ✓ | 68.7 | 3.6 | PLLaVA (34B) | 2024-04-25 |
| Elysium: Exploring Object-level Perception in Videos via MLLM | ✓ | 67.5 | 3.2 | Elysium | 2024-03-25 |
| SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | ✓ | 67.4 | 3.7 | SlowFast-LLaVA-34B | 2024-07-22 |
| Tarsier: Recipes for Training and Evaluating Large Video Description Models | ✓ | 66.4 | 3.7 | Tarsier (34B) | 2024-06-30 |
| LinVT: Empower Your Image-level Large Language Model to Understand Videos | ✓ | 66.2 | 4.0 | LinVT-Qwen2-VL (7B) | 2024-12-06 |
| TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | ✓ | 66.2 | 3.6 | TS-LLaVA-34B | 2024-11-17 |
| PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | ✓ | 64.3 | 3.5 | PPLLaVA-7B | 2024-11-04 |
| An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM | ✓ | 63.8 | 3.5 | IG-VLM | 2024-03-27 |
| ST-LLM: Large Language Models Are Effective Temporal Learners | ✓ | 63.2 | 3.4 | ST-LLM | 2024-03-30 |
| CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios | ✓ | 62.1 | 3.5 | CAT-7B | 2024-03-07 |
| VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | ✓ | 60.6 | 3.6 | VideoGPT+ | 2024-06-13 |
| Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens | | 60.5 | 3.3 | Vista-LLaMA-7B | 2023-12-12 |
| MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | ✓ | 59.7 | 3.0 | MiniGPT4-video-7B | 2024-04-04 |
| LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token | ✓ | 59.5 | 3.6 | LLaVA-Mini | 2025-01-07 |
| Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | ✓ | 59.3 | 3.3 | Video-LaVIT | 2024-02-05 |
| Video-LLaVA: Learning United Visual Representation by Alignment Before Projection | ✓ | 59.2 | 3.5 | Video-LLaVA-7B | 2023-11-16 |
| LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | ✓ | 58.9 | 3.3 | LLaMA-VID-13B (2 Token) | 2023-11-28 |
| LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | ✓ | 57.7 | 3.2 | LLaMA-VID-7B (2 Token) | 2023-11-28 |
| Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos | ✓ | 56.8 | | SUM-shot+Vicuna | 2023-12-16 |
| OmniDataComposer: A Unified Data Structure for Multimodal Data Fusion and Infinite Data Generation | ✓ | 55.3 | 3.3 | Omni-VideoAssistant | 2023-08-08 |
| Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | ✓ | 55.0 | 3.1 | Chat-UniVi-7B | 2023-11-14 |
| MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | ✓ | 54.1 | 3.3 | VideoChat2 | 2023-11-28 |
| MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | ✓ | 52.7 | 2.6 | MovieChat | 2023-07-31 |
| BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | ✓ | 51.2 | 2.9 | BT-Adapter (zero-shot) | 2023-09-27 |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | ✓ | 49.3 | 2.8 | Video-ChatGPT-7B | 2023-06-08 |
| VideoChat: Chat-Centric Video Understanding | ✓ | 45.0 | 2.5 | Video Chat-7B | 2023-05-10 |
| LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | ✓ | 43.8 | 2.7 | LLaMA Adapter-7B | 2023-04-28 |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | ✓ | 29.6 | 1.8 | Video LLaMA-7B | 2023-06-05 |
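
The Accuracy and Confidence Score columns on this benchmark are typically produced with the GPT-assisted evaluation protocol introduced by Video-ChatGPT: an LLM judge compares each predicted answer against the ground truth, returns a yes/no correctness verdict, and rates answer quality on a 0-5 scale; the leaderboard then reports the percentage of "yes" verdicts and the mean rating. The snippet below is a minimal sketch of that aggregation step; the `Judgement` structure and its field names are illustrative assumptions, not taken from any of the listed repositories.

```python
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class Judgement:
    """One judge verdict for a single QA pair (hypothetical field names)."""
    correct: bool  # judge's yes/no verdict on the predicted answer
    score: float   # judge's quality rating, typically on a 0-5 scale

def leaderboard_metrics(judgements: Iterable[Judgement]) -> Tuple[float, float]:
    """Aggregate per-question verdicts into the two leaderboard columns:
    Accuracy (% of 'yes' verdicts) and Confidence Score (mean rating)."""
    js = list(judgements)
    if not js:
        raise ValueError("no judgements to aggregate")
    accuracy = 100.0 * sum(j.correct for j in js) / len(js)
    mean_score = sum(j.score for j in js) / len(js)
    return round(accuracy, 1), round(mean_score, 1)

# Toy example: 2 of 3 answers judged correct -> Accuracy 66.7, Score 3.3
demo = [Judgement(True, 4), Judgement(True, 4), Judgement(False, 2)]
print("Accuracy: %s  Confidence Score: %s" % leaderboard_metrics(demo))
```

Because the judge model and prompt are not standardized across papers, rows evaluated under different judge setups are only loosely comparable, which is one reason many of the papers above rerun earlier baselines under their own evaluation configuration.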