Paper | Code | Top-1 Accuracy | Model Name | Release Date |
---|---|---|---|---|
The effectiveness of MAE pre-pretraining for billion-scale pretraining | ✓ Link | 84.6 | MAWS (ViT-6.5B) | 2023-03-23 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 84.29 | ViT-MoE-15B (Every-2) | 2021-06-10 |
The effectiveness of MAE pre-pretraining for billion-scale pretraining | ✓ Link | 83.7 | MAWS (ViT-2B) | 2023-03-23 |
The effectiveness of MAE pre-pretraining for billion-scale pretraining | ✓ Link | 82.5 | MAWS (ViT-H) | 2023-03-23 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 80.33 | V-MoE-H/14 (Every-2) | 2021-06-10 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 80.1 | V-MoE-H/14 (Last-5) | 2021-06-10 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 79.01 | ViT-H/14 | 2021-06-10 |