Paper | Code | Top-1 Accuracy | Top-5 Accuracy | Params | Method | Date |
--- | --- | --- | --- | --- | --- | --- |
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization | ✓ Link | 84.6% | | | DHO (ViT-Large) | 2025-05-12 |
Learning Customized Visual Models with Retrieval-Augmented Knowledge | ✓ Link | 81.6% | | | REACT (ViT-Large) | 2023-01-17 |
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization | ✓ Link | 81.6% | | | DHO (ViT-Base) | 2025-05-12 |
Meta Co-Training: Two Views are Better than One | ✓ Link | 80.7% | | | Meta Co-Training | 2023-11-29 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 80.7% | | | Semi-SST (ViT-Huge) | 2025-05-31 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 80.3% | | | Super-SST (ViT-Huge) | 2025-05-31 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 80.0% | 93.1% | | Semi-ViT (ViT-Huge) | 2022-08-11 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 77.3% | | | Semi-ViT (ViT-Large) | 2022-08-11 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 76.9% | | | Super-SST (ViT-Small distilled) | 2025-05-31 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 76.6% | 93.4% | | SimCLRv2 self-distilled (ResNet-152 x3, SK) | 2020-06-17 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 75.9% | 93.0% | | SimCLRv2 distilled (ResNet-50 x2, SK) | 2020-06-17 |
Masked Siamese Networks for Label-Efficient Learning | ✓ Link | 75.7% | | | MSN (ViT-B/4) | 2022-04-14 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 74.9% | 92.3% | | SimCLRv2 (ResNet-152 x3, SK) | 2020-06-17 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 73.9% | 91.5% | | SimCLRv2 distilled (ResNet-50) | 2020-06-17 |
SimMatchV2: Semi-Supervised Learning with Graph Consistency | ✓ Link | 71.9% | | | SimMatchV2 (ResNet-50) | 2023-08-13 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 71.4% | | | Semi-SST (ViT-Small) | 2025-05-31 |
Debiased Learning from Naturally Imbalanced Pseudo-Labels | ✓ Link | 71.3% | | | DebiasPL (ResNet-50) | 2022-01-05 |
Bootstrap your own latent: A new approach to self-supervised Learning | ✓ Link | 71.2% | 89.5% | | BYOL (ResNet-200 x2) | 2020-06-13 |
Semi-supervised Vision Transformers at Scale | ✓ Link | 71.0% | | | Semi-ViT (ViT-Base) | 2022-08-11 |
SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning | | 70.4% | | | Super-SST (ViT-Small) | 2025-05-31 |
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples | ✓ Link | 69.9% | | | PAWS (ResNet-50 4x) | 2021-04-28 |
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples | ✓ Link | 69.6% | | | PAWS (ResNet-50 2x) | 2021-04-28 |
Bootstrap your own latent: A new approach to self-supervised Learning | ✓ Link | 69.1% | 87.9% | | BYOL (ResNet-50 x4) | 2020-06-13 |
Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | ✓ Link | 68.6% | 87.6% | | SimMatch + EPASS (ResNet-50) | 2023-10-24 |
Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | ✓ Link | 67.4% | 87.3% | | CoMatch + EPASS (ResNet-50) | 2023-10-24 |
Self-Supervised Learning by Estimating Twin Class Distributions | ✓ Link | 67.2% | 88.2% | | TWIST (ResNet-50 x2) | 2021-10-14 |
SimMatch: Semi-supervised Learning with Similarity Matching | ✓ Link | 67.2% | | | SimMatch (ResNet-50) | 2022-03-14 |
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization | ✓ Link | 67.1% | 87.1% | | CoMatch (w. MoCo v2) | 2020-11-23 |
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples | ✓ Link | 66.5% | | | PAWS (ResNet-50) | 2021-04-28 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 66.3% | 87.4% | | SimCLRv2 (ResNet-50 x2) | 2020-06-17 |
Weakly Supervised Contrastive Learning | ✓ Link | 65.0% | 86.3% | | WCL (ResNet-50) | 2021-10-10 |
Boosting Contrastive Self-Supervised Learning with False Negative Cancellation | ✓ Link | 63.7% | 85.3% | | FNC (ResNet-50) | 2020-11-23 |
A Simple Framework for Contrastive Learning of Visual Representations | ✓ Link | 63.0% | 85.8% | | SimCLR (ResNet-50 4×) | 2020-02-13 |
Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning | ✓ Link | 63.0% | | | FixMatch-EMAN | 2021-01-21 |
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision | ✓ Link | 62.4% | | | SEER (RegNet10B) | 2022-02-16 |
Bootstrap your own latent: A new approach to self-supervised Learning | ✓ Link | 62.2% | 84.1% | | BYOL (ResNet-50 x2) | 2020-06-13 |
iBOT: Image BERT Pre-Training with Online Tokenizer | ✓ Link | 61.9% | | | iBOT (ViT-S/16) | 2021-11-15 |
Self-supervised Pretraining of Visual Features in the Wild | ✓ Link | 60.5% | | | SEER Large (RegNetY-256GF) | 2021-03-02 |
SemiReward: A General Reward Model for Semi-supervised Learning | ✓ Link | 59.64% | | | SemiReward | 2023-10-04 |
A Simple Framework for Contrastive Learning of Visual Representations | ✓ Link | 58.5% | 83.0% | | SimCLR (ResNet-50 2×) | 2020-02-13 |
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? | ✓ Link | 58.1% | 81.3% | | RELICv2 | 2022-01-13 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | ✓ Link | 57.9% | 82.5% | | SimCLRv2 (ResNet-50) | 2020-06-17 |
Self-supervised Pretraining of Visual Features in the Wild | ✓ Link | 57.5% | | | SEER Small (RegNetY-128GF) | 2021-03-02 |
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations | ✓ Link | 56.4% | 80.7% | | NNCLR (ResNet-50) | 2021-04-29 |
VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution | ✓ Link | 55.8% | 81.0% | | I-VNE+ (ResNet-50) | 2023-04-04 |
Barlow Twins: Self-Supervised Learning via Redundancy Reduction | ✓ Link | 55.0% | 79.2% | | Barlow Twins (ResNet-50) | 2021-03-04 |
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | ✓ Link | 54.8% | 79.4% | | VICReg (ResNet-50) | 2021-05-11 |
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments | ✓ Link | 53.9% | 78.5% | | SwAV (ResNet-50) | 2020-06-17 |
Bootstrap your own latent: A new approach to self-supervised Learning | ✓ Link | 53.2% | 78.4% | | BYOL (ResNet-50) | 2020-06-13 |
Data-Efficient Image Recognition with Contrastive Predictive Coding | | 52.7% | 77.9% | | CPC v2 (ResNet-161) | 2019-05-22 |
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | ✓ Link | 50.8% | 77.5% | 24M | SynCo (ResNet-50) 800ep | 2024-10-03 |
A Simple Framework for Contrastive Learning of Visual Representations | ✓ Link | 48.3% | 75.5% | | SimCLR (ResNet-50) | 2020-02-13 |
SCAN: Learning to Classify Images without Labels | ✓ Link | 39.90% | 60.0% | | SCAN (ResNet-50, unsupervised) | 2020-05-25 |
OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning | ✓ Link | | 82.9% | | OBoW (ResNet-50) | 2020-12-21 |
Prototypical Contrastive Learning of Unsupervised Representations | ✓ Link | | 75.6% | | PCL (ResNet-50) | 2020-05-11 |
Representation Learning with Contrastive Predictive Coding | ✓ Link | | 64.03% | | CPC | 2018-07-10 |
Large Scale Adversarial Representation Learning | ✓ Link | | 55.2% | | BigBiGAN (RevNet-50 ×4, BN+CReLU) | 2019-07-04 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 53.37% | | Rotation (joint training) | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 51.56% | | Pseudolabeling | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 47.02% | | Exemplar (joint training) | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 46.96% | | VAT + Entropy Minimization | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 45.11% | | Rotation | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 44.90% | | Exemplar | 2019-05-09 |
S4L: Self-Supervised Semi-Supervised Learning | ✓ Link | | 44.05% | | VAT | 2019-05-09 |
Unsupervised Feature Learning via Non-Parametric Instance Discrimination | ✓ Link | | 39.20% | | Instance Discrimination (ResNet-50) | 2018-06-01 |