OpenCodePapers
Efficient ViTs on ImageNet-1K with LV-ViT-S
Task: Image Classification · Benchmark: Efficient ViTs
Results over time
[Chart: Top-1 accuracy and GFLOPs for each model, plotted by release date.]
Leaderboard
✓ = public code linked · — = no code listed

Paper | Code | Top-1 Accuracy (%) | GFLOPs | Model | Release Date
--- | --- | --- | --- | --- | ---
Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ | 83.5 | 4.9 | MCTF ($r=8$) | 2024-03-15
Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ | 83.4 | 4.2 | MCTF ($r=12$) | 2024-03-15
All Tokens Matter: Token Labeling for Training Better Vision Transformers | ✓ | 83.3 | 6.6 | Base (LV-ViT-S) | 2021-04-22
DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | ✓ | 83.3 | 5.8 | DynamicViT (90%) | 2021-06-03
DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | ✓ | 83.2 | 5.1 | DynamicViT (80%) | 2021-06-03
Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | ✓ | 83.1 | 4.7 | BAT | 2022-11-21
Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | ✓ | 83.1 | 4.6 | AS-LV-S (70%) | 2022-09-28
PPT: Token Pruning and Pooling for Efficient Vision Transformers | ✓ | 83.1 | 4.6 | PPT | 2023-10-03
SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | ✓ | 83.1 | 4.3 | SPViT | 2021-12-27
Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | ✓ | 83.0 | 4.7 | EViT (70%) | 2022-02-16
DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | ✓ | 83.0 | 4.6 | DynamicViT (70%) | 2021-06-03
Patch Slimming for Efficient Vision Transformers | — | 82.9 | 4.5 | DPS-LV-ViT-S | 2021-06-05
DiffRate: Differentiable Compression Rate for Efficient Vision Transformers | ✓ | 82.6 | 3.9 | DiffRate | 2023-05-29
Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | ✓ | 82.6 | 3.9 | AS-LV-S (60%) | 2022-09-28
Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | ✓ | 82.6 | 3.8 | dTPS | 2023-04-21
Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | ✓ | 82.5 | 3.9 | EViT (50%) | 2022-02-16
Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | ✓ | 82.5 | 3.8 | eTPS | 2023-04-21
Patch Slimming for Efficient Vision Transformers | — | 82.4 | 4.7 | PS-LV-ViT-S | 2021-06-05
Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ | 82.3 | 3.6 | MCTF ($r=16$) | 2024-03-15
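For a quick read of the accuracy/compute trade-offs, the sketch below (plain Python, data transcribed from the table above) prints each model's FLOPs saving and Top-1 delta relative to the unpruned LV-ViT-S baseline (83.3% Top-1, 6.6 GFLOPs). The baseline-relative framing is our own summary, not a column of the leaderboard.

```python
# Minimal sketch: compare each leaderboard entry against the LV-ViT-S
# baseline (83.3% Top-1, 6.6 GFLOPs). Values transcribed from the table above.
BASE_ACC, BASE_GFLOPS = 83.3, 6.6

entries = [
    ("MCTF (r=8)",       83.5, 4.9),
    ("MCTF (r=12)",      83.4, 4.2),
    ("DynamicViT (90%)", 83.3, 5.8),
    ("DynamicViT (80%)", 83.2, 5.1),
    ("BAT",              83.1, 4.7),
    ("AS-LV-S (70%)",    83.1, 4.6),
    ("PPT",              83.1, 4.6),
    ("SPViT",            83.1, 4.3),
    ("EViT (70%)",       83.0, 4.7),
    ("DynamicViT (70%)", 83.0, 4.6),
    ("DPS-LV-ViT-S",     82.9, 4.5),
    ("DiffRate",         82.6, 3.9),
    ("AS-LV-S (60%)",    82.6, 3.9),
    ("dTPS",             82.6, 3.8),
    ("EViT (50%)",       82.5, 3.9),
    ("eTPS",             82.5, 3.8),
    ("PS-LV-ViT-S",      82.4, 4.7),
    ("MCTF (r=16)",      82.3, 3.6),
]

for name, acc, gflops in entries:
    saving = 1.0 - gflops / BASE_GFLOPS  # fraction of baseline FLOPs removed
    delta = acc - BASE_ACC               # Top-1 change vs. baseline, in points
    print(f"{name:20s} {saving:6.1%} fewer FLOPs  {delta:+.1f} pt Top-1")
```

At roughly baseline accuracy, for example, DynamicViT (90%) removes about 12% of the FLOPs, while MCTF ($r=8$) removes about 26% and gains 0.2 points.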