OpenCodePapers
Efficient ViTs on ImageNet-1K (with DeiT-T)
Image Classification · Efficient ViTs
Leaderboard
| Paper | Code | Top-1 Accuracy (%) | GFLOPs | Model Name | Release Date |
|---|---|---|---|---|---|
| Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | ✓ Link | 72.9 | 0.8 | dTPS | 2023-04-21 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ Link | 72.9 | 1.0 | MCTF ($r=8$) | 2024-03-15 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ Link | 72.7 | 0.7 | MCTF ($r=16$) | 2024-03-15 |
| Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | ✓ Link | 72.3 | 0.8 | BAT | 2022-11-21 |
| Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | ✓ Link | 72.3 | 0.8 | eTPS | 2023-04-21 |
| SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | ✓ Link | 72.2 | 1.0 | SPViT (1.0G) | 2021-12-27 |
| Training data-efficient image transformers & distillation through attention | ✓ Link | 72.2 | 1.2 | Base (DeiT-T) | 2020-12-23 |
| Patch Slimming for Efficient Vision Transformers | – | 72.1 | 0.6 | DPS-ViT | 2021-06-05 |
| PPT: Token Pruning and Pooling for Efficient Vision Transformers | ✓ Link | 72.1 | 0.8 | PPT | 2023-10-03 |
| SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | ✓ Link | 72.1 | 0.9 | SPViT (0.9G) | 2021-12-27 |
| Patch Slimming for Efficient Vision Transformers | – | 72.0 | 0.7 | PS-ViT | 2021-06-05 |
| Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer | ✓ Link | 72.0 | 0.8 | Evo-ViT | 2021-08-03 |
| Learned Thresholds Token Merging and Pruning for Vision Transformers | ✓ Link | 72.0 | 1.0 | LTMP (80%) | 2023-07-20 |
| Token Merging: Your ViT But Faster | ✓ Link | 71.7 | 0.9 | ToMe ($r=8$) | 2022-10-17 |
| Learned Thresholds Token Merging and Pruning for Vision Transformers | ✓ Link | 71.5 | 0.8 | LTMP (60%) | 2023-07-20 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | ✓ Link | 71.4 | 0.6 | MCTF ($r=20$) | 2024-03-15 |
| Token Merging: Your ViT But Faster | ✓ Link | 71.4 | 0.8 | ToMe ($r=12$) | 2022-10-17 |
| Token Merging: Your ViT But Faster | ✓ Link | 70.7 | 0.6 | ToMe ($r=16$) | 2022-10-17 |
| Pruning Self-attentions into Convolutional Layers in Single Path | ✓ Link | 70.7 | 1.0 | SPViT | 2021-11-23 |
| Chasing Sparsity in Vision Transformers: An End-to-End Exploration | ✓ Link | 70.1 | 0.9 | S$^2$ViTE | 2021-06-08 |
| Learned Thresholds Token Merging and Pruning for Vision Transformers | ✓ Link | 69.8 | 0.7 | LTMP (45%) | 2023-07-20 |
| Scalable Vision Transformers with Hierarchical Pooling | ✓ Link | 69.6 | 0.6 | HVT-Ti-1 | 2021-03-19 |