OpenCodePapers

Image Classification on CIFAR-10

Image Classification
Results over time
Leaderboard
| Paper | Code | Accuracy (%) | Parameters | Model | Release Date |
|---|---|---|---|---|---|
| An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | ✓ | 99.5 | | ViT-H/14 | 2020-10-22 |
| DINOv2: Learning Robust Visual Features without Supervision | ✓ | 99.5 | | DINOv2 (ViT-g/14, frozen model, linear eval) | 2023-04-14 |
| An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems | ✓ | 99.49 | | µ2Net (ViT-L/16) | 2022-05-25 |
| An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | ✓ | 99.42 | | ViT-L/16 | 2020-10-22 |
| Going deeper with Image Transformers | ✓ | 99.4 | | CaiT-M-36 U 224 | 2021-03-31 |
| CvT: Introducing Convolutions to Vision Transformers | ✓ | 99.39 | | CvT-W24 | 2021-03-29 |
| Big Transfer (BiT): General Visual Representation Learning | ✓ | 99.37 | | BiT-L (ResNet) | 2019-12-24 |
| DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs | ✓ | 99.31 | | RDNet-L (224 res, IN-1K pretrained) | 2024-03-28 |
| DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs | ✓ | 99.31 | | RDNet-B (224 res, IN-1K pretrained) | 2024-03-28 |
| Three things everyone should know about Vision Transformers | ✓ | 99.3 | | ViT-B (attn fine-tune) | 2022-03-18 |
| An Algorithm for Routing Vectors in Sequences | ✓ | 99.2 | | Heinsen Routing + BEiT-large 16 224 | 2022-11-20 |
| Perturbated Gradients Updating within Unit Space for Deep Learning | ✓ | 99.13 | | ViT-B/16 (PUGD) | 2021-10-01 |
| Astroformer: More Data Might not be all you need for Classification | ✓ | 99.12 | | Astroformer | 2023-04-03 |
| Training data-efficient image transformers & distillation through attention | ✓ | 99.1 | | DeiT-B | 2020-12-23 |
| Transformer in Transformer | ✓ | 99.1 | | TNT-B | 2021-02-27 |
| Incorporating Convolution Designs into Visual Transformers | ✓ | 99.1 | | CeiT-S (384 finetune resolution) | 2021-03-22 |
| EfficientNetV2: Smaller Models and Faster Training | ✓ | 99.1 | | EfficientNetV2-L | 2021-04-01 |
| AutoFormer: Searching Transformers for Visual Recognition | ✓ | 99.1 | | AutoFormer-S \| 384 | 2021-07-01 |
| Reduction of Class Activation Uncertainty with Background Information | ✓ | 99.05 | | ViT-L/16 (Spinal FC, Background) | 2023-05-05 |
| Sample-Efficient Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search | ✓ | 99.03 | | LaNet | 2019-01-01 |
| GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism | ✓ | 99 | | GPIPE + transfer learning | 2018-11-16 |
| TResNet: High Performance GPU-Dedicated Architecture | ✓ | 99 | | TResNet-XL | 2020-03-30 |
| Incorporating Convolution Designs into Visual Transformers | ✓ | 99 | | CeiT-S | 2021-03-22 |
| EfficientNetV2: Smaller Models and Faster Training | ✓ | 99.0 | | EfficientNetV2-M | 2021-04-01 |
| Global Filter Networks for Image Classification | ✓ | 99.0 | | GFNet-H-B | 2021-07-01 |
| Big Transfer (BiT): General Visual Representation Learning | ✓ | 98.91 | | BiT-M (ResNet) | 2019-12-24 |
| EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | ✓ | 98.9 | | EfficientNet-B7 | 2019-05-28 |
| DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs | ✓ | 98.88 | | RDNet-T (224 res, IN-1K pretrained) | 2024-03-28 |
| Adaptive Split-Fusion Transformer | ✓ | 98.8 | | ASF-former-B | 2022-04-26 |
| Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training | ✓ | 98.71 | | PyramidNet-272, S=4 | 2020-11-30 |
| EfficientNetV2: Smaller Models and Faster Training | ✓ | 98.7 | | EfficientNetV2-S | 2021-04-01 |
| Adaptive Split-Fusion Transformer | ✓ | 98.7 | | ASF-former-S | 2022-04-26 |
| ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks | ✓ | 98.68 | | PyramidNet-272 (ASAM) | 2021-02-23 |
| FMix: Enhancing Mixed Sample Data Augmentation | ✓ | 98.64 | | PyramidNet + ShakeDrop + Fast AA + FMix | 2020-02-27 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ | 98.6 | | ViT-B/16-SAM | 2021-06-03 |
| ConvMLP: Hierarchical Convolutional MLPs for Vision | ✓ | 98.6 | | ConvMLP-M | 2021-09-09 |
| ConvMLP: Hierarchical Convolutional MLPs for Vision | ✓ | 98.6 | | ConvMLP-L | 2021-09-09 |
| Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Image Recognition | ✓ | 98.53 | | DVT (T2T-ViT-24) | 2021-05-31 |
| Rethinking Recurrent Neural Networks and Other Improvements for Image Classification | ✓ | 98.52 | | E2E-3M | 2020-07-30 |
| Incorporating Convolution Designs into Visual Transformers | ✓ | 98.5 | | CeiT-T | 2021-03-22 |
| Neural Architecture Transfer | ✓ | 98.4 | 6.9M | NAT-M4 | 2020-05-12 |
| Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training | ✓ | 98.38 | | WRN-40-10, S=4 | 2020-11-30 |
| Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training | ✓ | 98.32 | | WRN-28-10, S=4 | 2020-11-30 |
| Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training | ✓ | 98.31 | | Shake-Shake 26 2x96d, S=4 | 2020-11-30 |
| PSO-Convolutional Neural Networks with Heterogeneous Learning Rate | ✓ | 98.31 | | Dynamics 2 | 2022-05-20 |
| Fast AutoAugment | ✓ | 98.3 | | PyramidNet+ShakeDrop (Fast AA) | 2019-05-01 |
| ResNet strikes back: An improved training procedure in timm | ✓ | 98.3 | | ResNet50 (A1) | 2021-10-01 |
| Noisy Differentiable Architecture Search | ✓ | 98.28 | | NoisyDARTS-A-t | 2020-05-07 |
| Neural Architecture Transfer | ✓ | 98.2 | 6.2M | NAT-M3 | 2020-05-12 |
| LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | ✓ | 98.2 | | LeViT-192 | 2021-04-02 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ | 98.2 | | ResNet-152-SAM | 2021-06-03 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ | 98.2 | | ViT-S/16-SAM | 2021-06-03 |
| Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy | ✓ | 98.2 | | Bamboo (ViT-B/16) | 2022-03-15 |
| Learning Hyperparameters via a Data-Emphasized Variational Objective | ✓ | 98.2 | | DE ELBo (ViT-B/16) | 2025-02-03 |
| LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | ✓ | 98.1 | | LeViT-256 | 2021-04-02 |
| Regularizing Neural Networks via Adversarial Model Perturbation | ✓ | 98.02 | | PyramidNet + AA (AMP) | 2020-10-10 |
| EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations | ✓ | 98.01 | | EnAET | 2019-11-21 |
| MUXConv: Information Multiplexing in Convolutional Neural Networks | ✓ | 98.0 | | MUXNet-m | 2020-03-31 |
| LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | ✓ | 98 | | LeViT-384 | 2021-04-02 |
| Escaping the Big Data Paradigm with Compact Transformers | ✓ | 98 | | CCT-7/3x1* | 2021-04-12 |
| ConvMLP: Hierarchical Convolutional MLPs for Vision | ✓ | 98 | | ConvMLP-S | 2021-09-09 |
| ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware | ✓ | 97.92 | | Proxyless-G + c/o | 2018-12-02 |
| Neural Architecture Transfer | ✓ | 97.9 | 4.6M | NAT-M2 | 2020-05-12 |
| AutoDropout: Learning Dropout Patterns to Regularize Deep Networks | ✓ | 97.9 | | WRN-28-10+AutoDropout+RandAugment | 2021-01-05 |
| Squeeze-and-Excitation Networks | ✓ | 97.88 | | SENet + ShakeShake + Cutout | 2017-09-05 |
| Gated Convolutional Networks with Hybrid Connectivity for Image Classification | ✓ | 97.86 | | HCGNet-A3 | 2019-08-26 |
| Automatic Data Augmentation via Invariance-Constrained Learning | ✓ | 97.85 | | Wide-ResNet-28-10 | 2022-09-29 |
| AutoMix: Unveiling the Power of Mixup for Stronger Classifiers | ✓ | 97.84 | | ResNeXt-50 (AutoMix) | 2021-03-24 |
| Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images | ✓ | 97.82 | | ResNet-152x4-AGC (ImageNet-21K) | 2021-05-31 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ | 97.8 | | Mixer-B/16-SAM | 2021-06-03 |
| TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers | ✓ | 97.78 | | CCT-7/3x1+VTM | 2022-10-14 |
| MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks | ✓ | 97.73 | | WRN-28-10 | 2021-03-10 |
| Gated Convolutional Networks with Hybrid Connectivity for Image Classification | ✓ | 97.71 | | HCGNet-A2 | 2019-08-26 |
| Fixup Initialization: Residual Learning Without Normalization | ✓ | 97.7 | | WRN + fixup init + mixup + cutout | 2019-01-27 |
| Noisy Differentiable Architecture Search | ✓ | 97.61 | | NoisyDARTS-a | 2020-05-07 |
| TransBoost: Improving the Best ImageNet Performance using Deep Transduction | ✓ | 97.61 | | TransBoost-ResNet50 | 2022-05-26 |
| LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | ✓ | 97.6 | | LeViT-128 | 2021-04-02 |
| batchboost: regularization for stabilizing training with resistance to underfitting & overfitting | ✓ | 97.54 | | DenseNet-BC-190 + batchboost | 2020-01-21 |
| LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | ✓ | 97.5 | | LeViT-128S | 2021-04-02 |
| Learning Implicitly Recurrent CNNs Through Parameter Sharing | ✓ | 97.47 | | Shared WRN | 2019-02-26 |
| Manifold Mixup: Better Representations by Interpolating Hidden States | ✓ | 97.45 | | Manifold Mixup WRN 28-10 | 2018-06-13 |
| Neural networks with late-phase weights | ✓ | 97.45 | | WRN 28-14 | 2020-07-25 |
| SparseSwin: Swin Transformer with Sparse Transformer Block | ✓ | 97.43 | | SparseSwin | 2023-09-11 |
| Non-convex Learning via Replica Exchange Stochastic Gradient MCMC | ✓ | 97.42 | | WRN-28-10 with reSGHMC | 2020-08-12 |
| Neural Architecture Transfer | ✓ | 97.4 | 4.3M | NAT-M1 | 2020-05-12 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ | 97.4 | | ResNet-50-SAM | 2021-06-03 |
| mixup: Beyond Empirical Risk Minimization | ✓ | 97.3 | | DenseNet-BC-190 + Mixup | 2017-10-25 |
| Revisiting a kNN-based Image Classification System with High-capacity Storage | | 97.3 | | kNN-CLIP | 2022-04-03 |
| WaveMix: A Resource-efficient Neural Network for Image Analysis | ✓ | 97.29 | | WaveMixLite-144/7 | 2022-05-28 |
| Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding | ✓ | 97.2 | | Transformer local-attention (NesT-B) | 2021-05-26 |
| Averaging Weights Leads to Wider Optima and Better Generalization | ✓ | 97.12 | | ShakeShake-2x64d + SWA | 2018-03-14 |
| CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features | ✓ | 97.12 | | PyramidNet-200 + CutMix | 2019-05-13 |
| Automatic Data Augmentation via Invariance-Constrained Learning | ✓ | 97.05 | | Wide-ResNet-40-2 | 2022-09-29 |
| Oriented Response Networks | ✓ | 97.02 | | ORN | 2017-01-07 |
| Non-convex Learning via Replica Exchange Stochastic Gradient MCMC | ✓ | 96.87 | | WRN-16-8 with reSGHMC | 2020-08-12 |
| XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers For Convolutional Neural Networks | ✓ | 96.87 | | ResNet_XnIDR | 2021-11-21 |
| Gated Convolutional Networks with Hybrid Connectivity for Image Classification | ✓ | 96.85 | | HCGNet-A1 | 2019-08-26 |
| Neural networks with late-phase weights | ✓ | 96.81 | | WRN 28-10 | 2020-07-25 |
| AutoDropout: Learning Dropout Patterns to Regularize Deep Networks | ✓ | 96.8 | | AutoDropout | 2021-01-05 |
| Averaging Weights Leads to Wider Optima and Better Generalization | ✓ | 96.79 | | WRN-28-10 + SWA | 2018-03-14 |
| Patches Are All You Need? | ✓ | 96.74 | | ConvMixer-256/16 | 2022-01-24 |
| EXACT: How to Train Your Accuracy | ✓ | 96.73 | | EXACT (WRN-28-10) | 2022-05-19 |
| Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems | ✓ | 96.71 | | Wide ResNet+cutout | 2019-07-16 |
| Deep Pyramidal Residual Networks | ✓ | 96.69 | | Deep pyramidal residual network | 2016-10-10 |
| Deep Competitive Pathway Networks | ✓ | 96.62 | | CoPaNet-R-164 | 2017-09-29 |
| Densely Connected Convolutional Networks | ✓ | 96.54 | | DenseNet (DenseNet-BC-190) | 2016-08-25 |
| Selective Kernel Networks | ✓ | 96.53 | | SKNet-29 (ResNeXt-29, 16×32d) | 2019-03-15 |
| Fractional Max-Pooling | ✓ | 96.5 | | Fractional MP | 2014-12-18 |
| PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions | ✓ | 96.5 | | PDO-eConv (p8, 4.6M) | 2020-07-20 |
| UPANets: Learning from the Universal Pixel Attention Networks | ✓ | 96.47 | | UPANets | 2021-03-15 |
| Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks | ✓ | 96.46 | | GAC-SNN | 2023-08-12 |
| Pre-training of Lightweight Vision Transformers on Small Datasets with Minimally Scaled Images | | 96.41 | | ViT (lightweight, MAE pretrained) | 2024-02-06 |
| Neural Architecture Search with Reinforcement Learning | ✓ | 96.4 | | NAS-RL | 2016-11-05 |
| Training Neural Networks with Local Error Signals | ✓ | 96.4 | | VGG11B(2x) + LocalLearning + CO | 2019-01-20 |
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | ✓ | 96.378 | | ABNet-2G-R3-Combined | 2024-11-28 |
| Learning Identity Mappings with Residual Gates | | 96.35 | | Residual Gates + WRN | 2016-11-04 |
| PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions | ✓ | 96.32 | | PDO-eConv (p8, 2.62M) | 2020-07-20 |
| Towards Principled Design of Deep Convolutional Networks: Introducing SimpNet | ✓ | 96.29 | | SimpleNetv2 | 2018-02-17 |
| Non-convex Learning via Replica Exchange Stochastic Gradient MCMC | ✓ | 96.12 | | ResNet56 with reSGHMC | 2020-08-12 |
| When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | ✓ | 96.1 | | Mixer-S/16-SAM | 2021-06-03 |
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | ✓ | 96.088 | | ABNet-2G-R3 | 2024-11-28 |
| Regularizing Neural Networks via Adversarial Model Perturbation | ✓ | 96.03 | | PreActResNet18 (AMP) | 2020-10-10 |
| Patches Are All You Need? | ✓ | 96.03 | | ConvMixer-256/8 | 2022-01-24 |
| Preventing Manifold Intrusion with Locality: Local Mixup | ✓ | 95.97 | | Local Mixup Resnet18 | 2022-01-12 |
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | ✓ | 95.900 | | ABNet-2G-R2 | 2024-11-28 |
| Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images | ✓ | 95.78 | | ResNet-50x1-ACG (ImageNet-21K) | 2021-05-31 |
| On the Performance Analysis of Momentum Method: A Frequency Domain Perspective | ✓ | 95.66 | | ResNet18 (FSGDM) | 2024-11-29 |
| Striving for Simplicity: The All Convolutional Net | ✓ | 95.6 | | ACN | 2014-12-21 |
| Large-Scale Evolution of Image Classifiers | ✓ | 95.6 | | Evolution ensemble | 2017-03-03 |
| Benchopt: Reproducible, efficient and collaborative optimization benchmarks | ✓ | 95.55 | | ResNet-18 | 2022-06-27 |
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | ✓ | 95.536 | | ABNet-2G-R1 | 2024-11-28 |
| Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures | ✓ | 95.51 | | SimpleNetv1 | 2016-08-22 |
| MobileNetV2: Inverted Residuals and Linear Bottlenecks | ✓ | 95.50 | | Mobile Net_Sam | 2018-01-13 |
| IM-Loss: Information Maximization Loss for Spiking Neural Networks | | 95.49 | | IM-Loss (ResNet-19) | 2022-10-31 |
| Identity Mappings in Deep Residual Networks | ✓ | 95.4 | | ResNet-1001 | 2016-03-16 |
| Non-convex Learning via Replica Exchange Stochastic Gradient MCMC | ✓ | 95.35 | | ResNet32 with reSGHMC | 2020-08-12 |
| Learning Class Unique Features in Fine-Grained Visual Classification | | 95.33 | | ResNet-18+MM+FRL | 2020-11-22 |
| | | 95.32 | | PSN (Modified PLIF Net) | |
| Escaping the Big Data Paradigm with Compact Transformers | ✓ | 95.29 | | CCT-6/3x1 | 2021-04-12 |
| Momentum Residual Neural Networks | ✓ | 95.18 | | MomentumNet | 2021-02-15 |
| Context-Aware Compilation of DNN Training Pipelines across Edge and Cloud | ✓ | 95.16 | | Context-Aware Pipeline | 2021-12-30 |
| SRM: A Style-based Recalibration Module for Convolutional Neural Networks | ✓ | 95.05 | | SRM-ResNet-56 | 2019-03-26 |
| MixMatch: A Holistic Approach to Semi-Supervised Learning | ✓ | 95.05 | | MixMatch | 2019-05-06 |
| Sparse Networks from Scratch: Faster Training without Losing Performance | ✓ | 95.04 | | WRN-22-8 (Sparse Momentum) | 2019-07-10 |
| Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification | ✓ | 95.02 | | LP-BNN (ours) + cutout | 2020-12-04 |
| An Enhanced Scheme for Reducing the Complexity of Pointwise Convolutions in CNNs for Image Classification Based on Interleaved Grouped Filters without Divisibility Constraints | ✓ | 94.95 | | kEffNet-B0 V2 32ch + H Flip | 2022-09-08 |
| Deep Polynomial Neural Networks | ✓ | 94.9 | | Prodpoly | 2020-06-20 |
| CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters | ✓ | 94.79 | | ResNet-9 | 2022-03-29 |
| Deep Networks with Stochastic Depth | ✓ | 94.77 | | Stochastic Depth | 2016-03-30 |
| GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training | ✓ | 94.71 | | VGG-19 with GradInit | 2021-02-16 |
| Non-convex Learning via Replica Exchange Stochastic Gradient MCMC | ✓ | 94.62 | | ResNet20 with reSGHMC | 2020-08-12 |
| PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions | ✓ | 94.62 | | PDO-eConv (p6m, 0.37M) | 2020-07-20 |
| Large-Scale Evolution of Image Classifiers | ✓ | 94.6 | | Evolution | 2017-03-03 |
| Efficient Architecture Search by Network Transformation | ✓ | 94.6 | | RL+NT | 2017-07-16 |
| Convolutional Xformers for Vision | ✓ | 94.46 | | Convolutional Performer for Vision (CPV) | 2022-01-25 |
| How to Use Dropout Correctly on Residual Networks with Batch Normalization | ✓ | 94.4367 | | PreResNet-110 | 2023-02-13 |
| Deep Residual Networks with Exponential Linear Unit | ✓ | 94.4 | | ResNet+ELU | 2016-04-14 |
| Deep Complex Networks | ✓ | 94.4 | | Deep Complex | 2017-05-27 |
| PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions | ✓ | 94.35 | | PDO-eConv (p6, 0.36M) | 2020-07-20 |
| Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods | ✓ | 94.29 | | Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods | 2020-01-24 |
| All you need is a good init | ✓ | 94.2 | | Fitnet4-LSUV | 2015-11-19 |
| Learning local discrete features in explainable-by-design convolutional neural networks | ✓ | 94.15 | 0.89M | R-ExplaiNet-26 | 2024-10-31 |
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | ✓ | 94.118 | | ABNet-2G-R0 | 2024-11-28 |
| Mish: A Self Regularized Non-Monotonic Activation Function | ✓ | 94.05 | | ResNet 9 + Mish | 2019-08-23 |
| Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree | ✓ | 94.0 | | Tree+Max-Avg pooling | 2015-09-30 |
| Beta-Rank: A Robust Convolutional Filter Pruning Method For Imbalanced Medical Image Analysis | ✓ | 93.97 | | Beta-Rank | 2023-04-15 |
| Stochastic Subsampling With Average Pooling | | 93.861 | | ResNet-110 (SAP) | 2024-09-25 |
| On the Relationship between Self-Attention and Convolutional Layers | ✓ | 93.8 | | SA quadratic embedding | 2019-11-08 |
| Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks | ✓ | 93.75 | | kEffNet-B0 32ch | 2022-06-30 |
| Online Training Through Time for Spiking Neural Networks | ✓ | 93.73 | | OTTT | 2022-10-09 |
| Spatially-sparse convolutional neural networks | ✓ | 93.7 | | SSCNN | 2014-09-22 |
| With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations | ✓ | 93.7 | | NNCLR | 2021-04-29 |
| Scalable Bayesian Optimization Using Deep Neural Networks | ✓ | 93.6 | | Tuned CNN | 2015-02-19 |
| Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | ✓ | 93.5 | | Exponential Linear Units | 2015-11-23 |
| Batch-normalized Maxout Network in Network | ✓ | 93.3 | | BNM NiN | 2015-11-09 |
| Universum Prescription: Regularization using Unlabeled Data | | 93.3 | | Universum Prescription | 2015-11-11 |
| Competitive Multi-scale Convolution | | 93.1 | | CMsC | 2015-11-18 |
| Distilled Gradual Pruning with Pruned Fine-tuning | ✓ | 92.90 | | DGPPF-ResNet18 | 2024-02-15 |
| Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks | ✓ | 92.74 | | kMobileNet V3 Large 16ch | 2022-06-30 |
| Learning Activation Functions to Improve Deep Neural Networks | ✓ | 92.5 | | NiN+APL | 2014-12-21 |
| Training Very Deep Networks | ✓ | 92.4 | | VDN | 2015-07-22 |
| A Bregman Learning Framework for Sparse Neural Networks | ✓ | 92.3 | | ResNet | 2021-05-10 |
| Stacked What-Where Auto-encoders | ✓ | 92.2 | | SWWAE | 2015-06-08 |
| FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes | ✓ | 92.2 | | FlexTCN-7 | 2021-10-15 |
| "BNN - BN = ?": Training Binary Neural Networks without Batch Normalization | ✓ | 92.08 | | ReActNet-18 | 2021-04-16 |
| Mish: A Self Regularized Non-Monotonic Activation Function | ✓ | 92.02 | | ResNet v2-20 (Mish activation) | 2019-08-23 |
| Context-aware deep model compression for edge cloud computing | | 92.01 | | Context-Aware DNN tree | 2020-11-29 |
| Deeply-Supervised Nets | ✓ | 91.8 | | DSN | 2014-09-18 |
| BinaryConnect: Training Deep Neural Networks with binary weights during propagations | ✓ | 91.7 | | BinaryConnect | 2015-11-02 |
| Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities | ✓ | 91.7 | | CLS-GAN | 2017-01-23 |
| On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units | | 91.5 | | MIM | 2015-08-03 |
| Spectral Representations for Convolutional Neural Networks | | 91.4 | | Spectral Representations for Convolutional Neural Networks | 2015-06-11 |
| DLME: Deep Local-flatness Manifold Embedding | ✓ | 91.3 | | DLME (ResNet-18, linear) | 2022-07-07 |
| RMDL: Random Multimodel Deep Learning for Classification | ✓ | 91.21 | | RMDL (30 RDLs) | 2018-05-03 |
| Network In Network | ✓ | 91.2 | | Network in Network | 2013-12-16 |
| Trainable Activations for Image Classification | ✓ | 91.1 | | ResNet-26 (Trainable Activations) | 2023-01-26 |
| Trainable Activations for Image Classification | ✓ | 90.9 | | ResNet-32 (Trainable Activations) | 2023-01-26 |
| Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks | ✓ | 90.83 | | kDenseNet-BC L100 12ch | 2022-06-30 |
| Deep Networks with Internal Selective Attention through Feedback Connections | | 90.8 | | Deep Networks with Internal Selective Attention through Feedback Connections | 2014-07-11 |
| Maxout Networks | ✓ | 90.65 | | Maxout Network (k=2) | 2013-02-18 |
| Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation | | 90.65 | | ResNet-18 | 2019-11-13 |
| Improving Deep Neural Networks with Probabilistic Maxout Units | | 90.6 | | DNN+Probabilistic Maxout | 2013-12-20 |
| Practical Bayesian Optimization of Machine Learning Algorithms | ✓ | 90.5 | | GP EI | 2012-06-13 |
| Trainable Activations for Image Classification | ✓ | 90.5 | | ResNet-44 (Trainable Activations) | 2023-01-26 |
| Trainable Activations for Image Classification | ✓ | 90.4 | | ResNet-20 (Trainable Activations) | 2023-01-26 |
| Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision | ✓ | 90 | | SEER (RegNet10B) | 2022-02-16 |
| Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks | ✓ | 89.81 | | kMobileNet 16ch | 2022-06-30 |
| APAC: Augmented PAttern Classification with Neural Networks | | 89.7 | | APAC | 2015-05-13 |
| Dynamic Routing Between Capsules | ✓ | 89.4 | | ensemble of 7 models | 2017-10-26 |
| Deep Convolutional Neural Networks as Generic Feature Extractors | | 89.1 | | DCNN+GFE | 2017-10-06 |
| ImageNet Classification with Deep Convolutional Neural Networks | ✓ | 89 | | DCNN | 2012-12-01 |
| Trainable Activations for Image Classification | ✓ | 89.0 | | ResNet-14 (Trainable Activations) | 2023-01-26 |
| Multi-column Deep Neural Networks for Image Classification | ✓ | 88.8 | | MCDNN | 2012-02-13 |
| Empirical Evaluation of Rectified Activations in Convolutional Network | ✓ | 88.8 | | RReLU | 2015-05-05 |
| Trainable Activations for Image Classification | ✓ | 88.8 | | ResNet-56 (Trainable Activations) | 2023-01-26 |
| Fast-DENSER++: Evolving Fully-Trained Deep Artificial Neural Networks | | 88.73 | | F-DENSER++ | 2019-05-08 |
| Your Diffusion Model is Secretly a Zero-Shot Classifier | ✓ | 88.5 | | Diffusion Classifier (zero-shot) | 2023-03-28 |
| ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks | ✓ | 87.7 | | ReNet | 2015-05-03 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 87.65 | 0.95M | OnDev-LCT-8/3 | 2024-01-22 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 87.03 | 0.55M | OnDev-LCT-4/3 | 2024-01-22 |
| Efficient Convolutional Neural Networks on Raspberry Pi for Image Classification | ✓ | 87.03 | | TripleNet-B | 2022-04-02 |
| An Analysis of Unsupervised Pre-training in Light of Recent Advances | ✓ | 86.7 | | An Analysis of Unsupervised Pre-training in Light of Recent Advances | 2014-12-20 |
| ThreshNet: An Efficient DenseNet Using Threshold Mechanism to Reduce Connections | ✓ | 86.69 | | ThreshNet95 | 2022-01-09 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 86.64 | 0.91M | OnDev-LCT-8/1 | 2024-01-22 |
| Connection Reduction of DenseNet for Image Recognition | ✓ | 86.64 | | ShortNet1-53 | 2022-08-02 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 86.61 | 0.51M | OnDev-LCT-4/1 | 2024-01-22 |
| Learning in Wilson-Cowan model for metapopulation | ✓ | 86.59 | | CNN + Wilson-Cowan model RNN | 2024-06-24 |
| Trainable Activations for Image Classification | ✓ | 86.5 | | ResNet-8 (Trainable Activations) | 2023-01-26 |
| New Pruning Method Based on DenseNet Network for Image Classification | | 86.34 | | ThresholdNet | 2021-08-28 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 86.27 | 0.31M | OnDev-LCT-2/1 | 2024-01-22 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 86.04 | 0.35M | OnDev-LCT-2/3 | 2024-01-22 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 85.73 | 0.25M | OnDev-LCT-1/3 | 2024-01-22 |
| ResNet strikes back: An improved training procedure in timm | ✓ | 85.28 | | cvpr_class | 2021-10-01 |
| WaveMix: Multi-Resolution Token Mixing for Images | ✓ | 85.21 | | WaveMix | 2021-09-29 |
| Stochastic Pooling for Regularization of Deep Convolutional Neural Networks | ✓ | 84.9 | | Stochastic Pooling | 2013-01-16 |
| OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | | 84.55 | 0.21M | OnDev-LCT-1/1 | 2024-01-22 |
| Improving neural networks by preventing co-adaptation of feature detectors | ✓ | 84.4 | | Improving neural networks by preventing co-adaptation of feature detectors | 2012-07-03 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 83.36 | | CCN | 2021-07-05 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 83.26 | | CvN | 2021-07-05 |
| Unsupervised Learning using Pretrained CNN and Associative Memory Bank | | 83.1 | | UL-Hopfield (ULH) | 2018-05-02 |
| Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | ✓ | 82.8 | | DCGAN | 2015-11-19 |
| An Optimized Toolbox for Advanced Image Processing with Tsetlin Machine Composites | ✓ | 82.8 | | TM Composites Toolbox | 2024-06-02 |
| Convolutional Kernel Networks | | 82.2 | | CKN | 2014-06-12 |
| Evaluating the Performance of TAAF for image classification models | ✓ | 82.06 | | The Analog Activation Function | 2025-02-13 |
| Discriminative Unsupervised Feature Learning with Convolutional Neural Networks | ✓ | 82 | | Discriminative Unsupervised Feature Learning with Convolutional Neural Networks | 2014-12-01 |
| How Important is Weight Symmetry in Backpropagation? | ✓ | 80.98 | | Sign-symmetry | 2015-10-17 |
| Personalized Federated Learning with Hidden Information on Personalized Prior | | 80.63 | | pFedBreD_ns_mg | 2022-11-19 |
| Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | ✓ | 80.6 | | 1 Layer K-means | 2015-11-19 |
| Aggregated Pyramid Vision Transformer: Split-transform-merge Strategy for Image Recognition without Convolutions | | 80.45 | | APVT | 2022-03-02 |
| Learning with Recursive Perceptual Representations | | 79.7 | | Learning with Recursive Perceptual Representations | 2012-12-01 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 79.50 | | LeViP | 2021-07-05 |
| | | 78.9 | | Convolutional Deep Belief Network | |
| PCANet: A Simple Deep Learning Baseline for Image Classification? | ✓ | 78.7 | | PCANet | 2014-04-14 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 76.9 | | Hybrid ViT+RoPE | 2021-07-05 |
| Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network | | 75.9 | | FLSCNN | 2015-03-16 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 75.26 | | Hybrid Vision Nystromformer (ViN) | 2021-07-05 |
| Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine | ✓ | 75.1 | | CTM Drop Clause | 2021-05-30 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 74 | | Hybrid PiN | 2021-07-05 |
| SmoothNets: Optimizing CNN architecture design for differentially private deep learning | ✓ | 73.5 | | SmoothNetV1 | 2022-05-09 |
| Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data | ✓ | 68.3 | | SNN | 2023-02-13 |
| Vision Xformers: Efficient Attention for Image Classification | ✓ | 65.06 | | Vision Nystromformer (ViN) | 2021-07-05 |
| Augmented Neural ODEs | ✓ | 60.6 | | ANODE | 2019-04-02 |
| Efficient Adaptive Ensembling for Image Classification | | 99.612 | | efficient adaptive ensembling | 2022-06-15 |
| Performance of Gaussian Mixture Model Classifiers on Embedded Feature Spaces | ✓ | 98.8 | | DGMMC-S | 2024-10-17 |
| SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers | ✓ | 95.74 | | SAG-ViT | 2024-11-14 |
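The leaderboard's primary metric, top-1 accuracy, is the fraction of test images whose highest-scoring class matches the ground-truth label; the cross-entropy column is the mean negative log-likelihood of the true class. A minimal sketch of both computations (function names and the toy batch are illustrative, not taken from any listed paper):

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class equals the label."""
    return float((logits.argmax(axis=1) == labels).mean())

def cross_entropy(logits, labels):
    """Mean negative log-likelihood of the true class under softmax(logits)."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

# Toy batch: 4 samples over the 10 CIFAR-10 classes (illustrative values only).
logits = 5.0 * np.eye(10)[[3, 1, 4, 0]]   # each row peaks at one class
labels = np.array([3, 1, 4, 1])           # last sample is misclassified
acc = top1_accuracy(logits, labels)       # 3 of 4 correct -> 0.75
loss = cross_entropy(logits, labels)
```

On the real benchmark the same computation runs over the 10,000-image CIFAR-10 test set; "Percentage correct" and "Top-1 Accuracy" in the original column set are this quantity expressed in percent.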