Paper | Code | Top-1 | Top-5 | Params | Method | Date |
--- | --- | --- | --- | --- | --- | --- |
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization | ✓ Link | 85.9% | | | DHO (ViT-Large) | 2025-05-12 |
Meta Co-Training: Two Views are Better than One | ✓ Link | 85.8% | | | Meta Co-Training | 2023-11-29 |
Learning Customized Visual Models with Retrieval-Augmented Knowledge | ✓ Link | 85.1% | | | REACT (ViT-Large) | 2023-01-17 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 84.9% | | | Semi-SST (ViT-Huge) | 2025-05-31 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 84.8% | | | Super-SST (ViT-Huge) | 2025-05-31 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 84.3% | 96.6% | | Semi-ViT (ViT-Huge) | 2022-08-11 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 83.3% | | | Semi-ViT (ViT-Large) | 2022-08-11 |
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization | ✓ Link | 82.8% | | | DHO (ViT-Base) | 2025-05-12 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 80.9% | 95.5% | | SimCLRv2 self-distilled (ResNet-152 x3, SK) | 2020-06-17 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 80.3% | | | Super-SST (ViT-Small distilled) | 2025-05-31 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 80.2% | 95.0% | | SimCLRv2 distilled (ResNet-50 x2, SK) | 2020-06-17 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 80.1% | 95.0% | | SimCLRv2 (ResNet-152 x3, SK) | 2020-06-17 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 79.7% | | | Semi-ViT (ViT-Base) | 2022-08-11 |
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples | ✓ Link | 79.0% | | | PAWS (ResNet-50 4x) | 2021-04-28 |
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision | ✓ Link | 78.8% | | | SEER (RegNet10B) | 2022-02-16 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 78.6% | | | Semi-SST (ViT-Small) | 2025-05-31 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 78.3% | | | Super-SST (ViT-Small) | 2025-05-31 |
Self-supervised Pretraining of Visual Features in the Wild | ✓ Link | 77.9% | | | SEER Large (RegNetY-256GF) | 2021-03-02 |
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples | ✓ Link | 77.8% | | | PAWS (ResNet-50 2x) | 2021-04-28 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 77.5% | 93.4% | | SimCLRv2 distilled (ResNet-50) | 2020-06-17 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 77.1% | | | Semi-ViT (ViT-Small) | 2022-08-11 |
Self-supervised Pretraining of Visual Features in the Wild | ✓ Link | 76.7% | | | SEER Small (RegNetY-128GF) | 2021-03-02 |
SimMatchV2: Semi-Supervised Learning with Graph Consistency | ✓ Link | 76.2% | | | SimMatchV2 (ResNet-50) | 2023-08-13 |
Semi-Supervised Vision Transformers | ✓ Link | 75.5% | | | Semiformer (ViT-S + Conv) | 2021-11-22 |
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples | ✓ Link | 75.5% | | | PAWS (ResNet-50) | 2021-04-28 |
Self-Supervised Learning by Estimating Twin Class Distributions | ✓ Link | 75.3% | 92.8% | | TWIST (ResNet-50 x2) | 2021-10-14 |
Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | ✓ Link | 75.3% | 92.6% | | SimMatch + EPASS (ResNet-50) | 2023-10-24 |
SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning | ✓ Link | 75.2% | 91.9% | | SequenceMatch (ResNet-50) | 2023-10-24 |
SimMatch: Semi-supervised Learning with Similarity Matching | ✓ Link | 74.4% | | | SimMatch (ResNet-50) | 2022-03-14 |
Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | ✓ Link | 74.1% | 91.5% | | CoMatch + EPASS (ResNet-50) | 2023-10-24 |
Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning | ✓ Link | 74.0% | | | FixMatch-EMAN | 2021-01-21 |
Milking CowMask for Semi-Supervised Image Classification | ✓ Link | 73.94% | 91.24% | | CowMix (ResNet-152) | 2020-03-26 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 73.9% | 91.9% | | SimCLRv2 (ResNet-50 x2) | 2020-06-17 |
Meta Pseudo Labels | ✓ Link | 73.89% | 91.38% | | Meta Pseudo Labels (ResNet-50) | 2020-03-23 |
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization | ✓ Link | 73.7% | 91.4% | | CoMatch (w. MoCo v2) | 2020-11-23 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | 73.21% | 91.23% | | S4L-MOAM (ResNet-50 4×) | 2019-05-09 |
Data-Efficient Image Recognition with Contrastive Predictive Coding | ✓ Link | 73.1% | 91.2% | | CPC v2 (ResNet-161) | 2019-05-22 |
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? | ✓ Link | 72.4% | 91.2% | | RELICv2 (ResNet-50) | 2022-01-13 |
Weakly Supervised Contrastive Learning | ✓ Link | 72.0% | 91.2% | | WCL (ResNet-50) | 2021-10-10 |
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations | ✓ Link | 69.8% | 89.3% | | NNCLR (ResNet-50) | 2021-04-29 |
Barlow Twins: Self-Supervised Learning via Redundancy Reduction | ✓ Link | 69.7% | 89.3% | | Barlow Twins (ResNet-50) | 2021-03-04 |
VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution | ✓ Link | 69.1% | 89.9% | | I-VNE+ (ResNet-50) | 2023-04-04 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 68.4% | 89.2% | | SimCLRv2 (ResNet-50) | 2020-06-17 |
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | ✓ Link | 66.6% | 88.0% | 24M | SynCo (ResNet-50) 800ep | 2024-10-03 |
FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling | ✓ Link | 64.79% | 86.04% | | FlexMatch | 2021-10-15 |
Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning | ✓ Link | 63.52% | 83.58% | | Dual Student | 2019-09-03 |
NP-Match: When Neural Processes meet Semi-Supervised Learning | ✓ Link | 58.22% | | | NP-Match (ResNet-50) | 2022-07-03 |
A Simple Framework for Contrastive Learning of Visual Representations | ✓ Link | | 92.6% | | SimCLR (ResNet-50 4×) | 2020-02-13 |
A Simple Framework for Contrastive Learning of Visual Representations | ✓ Link | | 91.2% | | SimCLR (ResNet-50 2×) | 2020-02-13 |
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | ✓ Link | | 90.89% | | Mean Teacher (ResNeXt-152) | 2017-03-06 |
OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning | ✓ Link | | 90.7% | | OBoW (ResNet-50) | 2020-12-21 |
Repetitive Reprediction Deep Decipher for Semi-Supervised Learning | ✓ Link | | 90.48% | | R2-D2 (ResNet-18) | 2019-08-09 |
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence | ✓ Link | | 89.13% | | FixMatch | 2020-01-21 |
Unsupervised Data Augmentation for Consistency Training | ✓ Link | | 88.52% | | UDA | 2019-04-29 |
A Simple Framework for Contrastive Learning of Visual Representations | ✓ Link | | 87.8% | | SimCLR (ResNet-50) | 2020-02-13 |
Representation Learning with Contrastive Predictive Coding | ✓ Link | | 84.88% | | CPC | 2018-07-10 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 83.82% | | S4L-Rotation (ResNet-50) | 2019-05-09 |
Self-Supervised Learning of Pretext-Invariant Representations | ✓ Link | | 83.8% | | PIRL (ResNet-50) | 2019-12-04 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 83.72% | | S4L-Exemplar (ResNet-50) | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 83.39% | | VAT + Entropy Minimization (ResNet-50) | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 82.78% | | VAT (ResNet-50) | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 82.41% | | Pseudolabeling (ResNet-50) | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 81.01% | | Exemplar Fine-tuned (ResNet-50) | 2019-05-09 |
Large Scale Adversarial Representation Learning | ✓ Link | | 78.8% | | BigBiGAN (RevNet-50 ×4, BN+CReLU) | 2019-07-04 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 78.53% | | Rotation Fine-tuned (ResNet-50) | 2019-05-09 |
Unsupervised Feature Learning via Non-Parametric Instance Discrimination | ✓ Link | | 77.4% | | InstDisc (ResNet-50) | 2018-06-01 |
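For working with rows like the ones above programmatically, here is a minimal sketch that parses pipe-delimited leaderboard rows into records and ranks them by Top-1 accuracy. The column order (Paper | Code | Top-1 | Top-5 | Params | Method | Date) is assumed from the table; the sample rows are copied from the entries above.

```python
# Minimal sketch: parse pipe-delimited leaderboard rows and rank by Top-1.
# Assumed column order: Paper | Code | Top-1 | Top-5 | Params | Method | Date.

def parse_row(line: str) -> dict:
    """Split one table row into named fields; empty cells become None."""
    cells = [c.strip() or None for c in line.strip().strip("|").split("|")]
    paper, code, top1, top5, params, method, date = cells

    def pct(v):
        # "85.8%" -> 85.8; empty cells stay None
        return float(v.rstrip("%")) if v else None

    return {"paper": paper, "top1": pct(top1), "top5": pct(top5),
            "method": method, "date": date}

rows = [
    "Meta Co-Training: Two Views are Better than One | ✓ Link | 85.8% | | | Meta Co-Training | 2023-11-29 |",
    "Semi-supervised Vision Transformers at Scale | ✓ Link | 84.3% | 96.6% | | Semi-ViT (ViT-Huge) | 2022-08-11 |",
]
records = sorted((parse_row(r) for r in rows),
                 key=lambda r: r["top1"] or 0.0, reverse=True)
print(records[0]["method"])  # entry with the highest Top-1 accuracy
```

Rows missing a Top-1 score (Top-5-only entries near the bottom of the table) sort last under the `or 0.0` fallback.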