OpenCodePapers

Zero-Shot Video Question Answer on ActivityNet

Tasks: Video Question Answering · Zero-Shot Video Question Answer
Leaderboard
| Paper | Code | Accuracy (%) | Confidence Score | Model Name | Release Date |
|---|---|---|---|---|---|
| Tarsier: Recipes for Training and Evaluating Large Video Description Models | ✓ | 61.6 | 3.7 | Tarsier (34B) | 2024-06-30 |
| PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | ✓ | 60.9 | 3.7 | PLLaVA (34B) | 2024-04-25 |
| PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | ✓ | 60.7 | 3.6 | PPLLaVA-7B | 2024-11-04 |
| LinVT: Empower Your Image-level Large Language Model to Understand Videos | ✓ | 60.1 | 3.6 | LinVT-Qwen2-VL(7B) | 2024-12-06 |
| SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | ✓ | 59.2 | 3.5 | SlowFast-LLaVA-34B | 2024-07-22 |
| TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | ✓ | 58.9 | 3.5 | TS-LLaVA-34B | 2024-11-17 |
| An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM | ✓ | 58.4 | 3.5 | IG-VLM | 2024-03-27 |
| LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token | ✓ | 53.5 | 3.5 | LLaVA-Mini | 2025-01-07 |
| Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams | ✓ | 51.9 | 3.4 | Flash-VStream | 2024-06-12 |
| ST-LLM: Large Language Models Are Effective Temporal Learners | ✓ | 50.9 | 3.3 | ST-LLM | 2024-03-30 |
| VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | ✓ | 50.6 | 3.6 | VideoGPT+ | 2024-06-13 |
| CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios | ✓ | 50.2 | 3.5 | CAT-7B | 2024-03-07 |
| Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | ✓ | 50.1 | 3.3 | Video-LaVIT | 2024-02-05 |
| MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | ✓ | 49.1 | 3.3 | VideoChat2 | 2023-11-28 |
| LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | ✓ | 47.5 | 3.3 | LLaMA-VID-13B (2 Token) | 2023-11-28 |
| LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | ✓ | 47.4 | 3.3 | LLaMA-VID-7B (2 Token) | 2023-11-28 |
| Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | ✓ | 46.4 | 3.6 | Chat-UniVi-13B | 2023-11-14 |
| MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | ✓ | 46.3 | - | MiniGPT4-video-7B | 2024-04-04 |
| Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | ✓ | 46.1 | 3.3 | Chat-UniVi | 2023-11-14 |
| BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | ✓ | 46.1 | 3.2 | BT-Adapter (zero-shot) | 2023-09-27 |
| MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | ✓ | 45.7 | 3.1 | MovieChat | 2023-07-31 |
| Video-LLaVA: Learning United Visual Representation by Alignment Before Projection | ✓ | 45.3 | 3.3 | Video-LLaVA | 2023-11-16 |
| Elysium: Exploring Object-level Perception in Videos via MLLM | ✓ | 43.4 | 2.9 | Elysium | 2024-03-25 |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | ✓ | 35.2 | 2.7 | Video-ChatGPT | 2023-06-08 |
| LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | ✓ | 34.2 | 2.7 | LLaMA Adapter | 2023-04-28 |
| VideoChat: Chat-Centric Video Understanding | ✓ | 26.5 | 2.2 | Video Chat | 2023-05-10 |
| Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | ✓ | 24.7 | - | FrozenBiLM | 2022-06-16 |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | ✓ | 12.4 | 1.1 | Video LLaMA | 2023-06-05 |