Paper | Code | Top-1 Accuracy (%) | Model Name | Release Date |
---|---|---|---|---|
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 68.66 | V-MoE-15B (Every-2) | 2021-06-10 |
The effectiveness of MAE pre-pretraining for billion-scale pretraining | ✓ Link | 63.6 | MAWS (ViT-6.5B) | 2023-03-23 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 63.38 | V-MoE-H/14 (Every-2) | 2021-06-10 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 62.95 | V-MoE-H/14 (Last-5) | 2021-06-10 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 62.41 | V-MoE-L/16 (Every-2) | 2021-06-10 |
Scaling Vision with Sparse Mixture of Experts | ✓ Link | 62.34 | ViT-H/14 | 2021-06-10 |
The effectiveness of MAE pre-pretraining for billion-scale pretraining | ✓ Link | 62.1 | MAWS (ViT-2B) | 2023-03-23 |
The effectiveness of MAE pre-pretraining for billion-scale pretraining | ✓ Link | 57.1 | MAWS (ViT-H) | 2023-03-23 |