OpenCodePapers

Efficient ViTs on ImageNet-1K (with LV-ViT-S)

Image Classification · Efficient ViTs
Results over time
Leaderboard
| Paper | Code | Top-1 Accuracy (%) | GFLOPs | Model | Release Date |
|---|---|---|---|---|---|
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ | 83.5 | 4.9 | MCTF ($r=8$) | 2024-03-15 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ | 83.4 | 4.2 | MCTF ($r=12$) | 2024-03-15 |
| All Tokens Matter: Token Labeling for Training Better Vision Transformers | ✓ | 83.3 | 6.6 | Base (LV-ViT-S) | 2021-04-22 |
| DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | ✓ | 83.3 | 5.8 | DynamicViT (90%) | 2021-06-03 |
| DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | ✓ | 83.2 | 5.1 | DynamicViT (80%) | 2021-06-03 |
| Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | ✓ | 83.1 | 4.7 | BAT | 2022-11-21 |
| Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | ✓ | 83.1 | 4.6 | AS-LV-S (70%) | 2022-09-28 |
| PPT: Token Pruning and Pooling for Efficient Vision Transformers | ✓ | 83.1 | 4.6 | PPT | 2023-10-03 |
| SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | ✓ | 83.1 | 4.3 | SPViT | 2021-12-27 |
| Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | ✓ | 83.0 | 4.7 | EViT (70%) | 2022-02-16 |
| DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | ✓ | 83.0 | 4.6 | DynamicViT (70%) | 2021-06-03 |
| Patch Slimming for Efficient Vision Transformers | | 82.9 | 4.5 | DPS-LV-ViT-S | 2021-06-05 |
| DiffRate: Differentiable Compression Rate for Efficient Vision Transformers | ✓ | 82.6 | 3.9 | DiffRate | 2023-05-29 |
| Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | ✓ | 82.6 | 3.9 | AS-LV-S (60%) | 2022-09-28 |
| Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | ✓ | 82.6 | 3.8 | dTPS | 2023-04-21 |
| Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | ✓ | 82.5 | 3.9 | EViT (50%) | 2022-02-16 |
| Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | ✓ | 82.5 | 3.8 | eTPS | 2023-04-21 |
| Patch Slimming for Efficient Vision Transformers | | 82.4 | 4.7 | PS-LV-ViT-S | 2021-06-05 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ | 82.3 | 3.6 | MCTF ($r=16$) | 2024-03-15 |
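The leaderboard trades accuracy against compute, so a single ranking by top-1 accuracy hides the efficiency story. A minimal sketch below ranks a hand-copied subset of the rows above by top-1 accuracy per GFLOP, a rough (and admittedly simplistic) efficiency score; the row values come straight from the table, everything else is illustrative.

```python
# Rank a few leaderboard rows by a crude efficiency score:
# top-1 accuracy (%) divided by GFLOPs. Rows hand-copied from
# the table above: (model, top-1 %, GFLOPs).
rows = [
    ("MCTF (r=8)", 83.5, 4.9),
    ("MCTF (r=12)", 83.4, 4.2),
    ("Base (LV-ViT-S)", 83.3, 6.6),
    ("SPViT", 83.1, 4.3),
    ("MCTF (r=16)", 82.3, 3.6),
]

def acc_per_gflop(row):
    """Accuracy per GFLOP; higher means more accuracy per unit compute."""
    _, acc, gflops = row
    return acc / gflops

ranked = sorted(rows, key=acc_per_gflop, reverse=True)
for model, acc, gflops in ranked:
    print(f"{model:16s} {acc:.1f}% @ {gflops:.1f} GFLOPs -> {acc / gflops:.2f} %/GFLOP")
```

Under this metric the cheapest model (MCTF with $r=16$) ranks first and the unpruned LV-ViT-S baseline last, which is expected: accuracy differences across these methods are fractions of a point while FLOP budgets nearly double.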