OpenCodePapers
fine-grained-image-classification-on-oxford-2
Fine-Grained Image Classification
Results over time
Leaderboard
| Paper | Code | Accuracy | Top-1 Error Rate | FLOPs | Params | Model Name | Release Date |
|---|---|---|---|---|---|---|---|
| Sharpness-Aware Minimization for Efficiently Improving Generalization | ✓ Link | 97.10 | 2.90% | | | EffNet-L2 (SAM) | 2020-10-03 |
| Big Transfer (BiT): General Visual Representation Learning | ✓ Link | 96.62 | 3.38% | | | BiT-L (ResNet) | 2019-12-24 |
| A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems | ✓ Link | 95.5 | | | | µ2Net+ (ViT-L/16) | 2022-09-15 |
| An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems | ✓ Link | 95.3 | | | | µ2Net (ViT-L/16) | 2022-05-25 |
| Big Transfer (BiT): General Visual Representation Learning | ✓ Link | 94.47 | 5.53% | | | BiT-M (ResNet) | 2019-12-24 |
| Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network | ✓ Link | 94.3 | 5.7% | | | Assemble-ResNet-FGVC-50 | 2020-01-17 |
| Neural Architecture Transfer | ✓ Link | 94.3 | 5.7% | 744M | 8.5M | NAT-M4 | 2020-05-12 |
| Neural Architecture Transfer | ✓ Link | 94.1 | 5.9% | 471M | 5.7M | NAT-M3 | 2020-05-12 |
| Neural Architecture Transfer | ✓ Link | 93.5 | 6.5% | 306M | 5.5M | NAT-M2 | 2020-05-12 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ Link | 93.3 | | | | ResNet-152-SAM | 2021-06-03 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ Link | 93.1 | | | | ViT-B/16-SAM | 2021-06-03 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ Link | 92.9 | | | | ViT-S/16-SAM | 2021-06-03 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ Link | 92.5 | | | | Mixer-B/16-SAM | 2021-06-03 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ Link | 91.6 | | | | ResNet-50-SAM | 2021-06-03 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ Link | 88.7 | | | | Mixer-S/16-SAM | 2021-06-03 |
| Stochastic Subsampling With Average Pooling | | 86.011 | | | | SE-ResNet-101 (SAP) | 2024-09-25 |
| How to Use Dropout Correctly on Residual Networks with Batch Normalization | ✓ Link | 85.5897 | | | | PreResNet-101 | 2023-02-13 |
| On the Ideal Number of Groups for Isometric Gradient Propagation | | 77.076 | | | | ResNet-101 (ideal number of groups) | 2023-02-07 |
| An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | ✓ Link | | 6.2% | | | ViT-B/16 | 2020-10-22 |
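Where both columns are reported, Top-1 Error Rate is simply the complement of Accuracy (e.g. 97.10% accuracy corresponds to 2.90% error). A minimal sketch of that conversion, assuming accuracy is given as a percentage (the helper name `top1_error` is purely illustrative, not part of the leaderboard's tooling):

```python
def top1_error(accuracy_pct: float) -> float:
    """Top-1 error rate (%) implied by a top-1 accuracy (%)."""
    return round(100.0 - accuracy_pct, 2)

# Example: EffNet-L2 (SAM) reports 97.10% accuracy -> 2.90% error.
assert top1_error(97.10) == 2.90
```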