Sparse Learning on ImageNet
Dataset: ImageNet
Results over time (chart: Top-1 Accuracy by release date)
Leaderboard
Paper | Code | Top-1 Accuracy (%) | Model Name | Release Date
Rigging the Lottery: Making All Tickets Winners | ✓ Link | 77.1 | ResNet-50: 80% Sparse | 2019-11-25
Rigging the Lottery: Making All Tickets Winners | ✓ Link | 76.4 | ResNet-50: 90% Sparse | 2019-11-25
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | ✓ Link | 76 | ResNet-50: 80% Sparse, 100 epochs | 2021-06-19
Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | ✓ Link | 75.84 | ResNet-50: 80% Sparse, 100 epochs | 2021-02-04
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | ✓ Link | 74.5 | ResNet-50: 90% Sparse, 100 epochs | 2021-06-19
Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | ✓ Link | 73.82 | ResNet-50: 90% Sparse, 100 epochs | 2021-02-04
Rigging the Lottery: Making All Tickets Winners | ✓ Link | 71.9 | MobileNet-v1: 75% Sparse | 2019-11-25
Rigging the Lottery: Making All Tickets Winners | ✓ Link | 68.1 | MobileNet-v1: 90% Sparse | 2019-11-25
Sparse learning of stochastic dynamic equations | ✓ Link | 6 | SINDy | 2017-12-06
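For readers who want to work with these results programmatically, below is a minimal Python sketch (not part of the leaderboard itself) that encodes the rows above as simple records and re-ranks them by Top-1 accuracy. The Entry record and its field names are illustrative assumptions, not an OpenCodePapers API; the numbers are copied verbatim from the table.

from dataclasses import dataclass

@dataclass
class Entry:
    paper: str     # paper title
    model: str     # model / training configuration
    top1: float    # Top-1 accuracy on ImageNet, in percent
    released: str  # release date, YYYY-MM-DD

# Rows copied from the leaderboard table above.
ENTRIES = [
    Entry("Rigging the Lottery: Making All Tickets Winners", "ResNet-50: 80% Sparse", 77.1, "2019-11-25"),
    Entry("Rigging the Lottery: Making All Tickets Winners", "ResNet-50: 90% Sparse", 76.4, "2019-11-25"),
    Entry("Sparse Training via Boosting Pruning Plasticity with Neuroregeneration", "ResNet-50: 80% Sparse, 100 epochs", 76.0, "2021-06-19"),
    Entry("Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training", "ResNet-50: 80% Sparse, 100 epochs", 75.84, "2021-02-04"),
    Entry("Sparse Training via Boosting Pruning Plasticity with Neuroregeneration", "ResNet-50: 90% Sparse, 100 epochs", 74.5, "2021-06-19"),
    Entry("Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training", "ResNet-50: 90% Sparse, 100 epochs", 73.82, "2021-02-04"),
    Entry("Rigging the Lottery: Making All Tickets Winners", "MobileNet-v1: 75% Sparse", 71.9, "2019-11-25"),
    Entry("Rigging the Lottery: Making All Tickets Winners", "MobileNet-v1: 90% Sparse", 68.1, "2019-11-25"),
    Entry("Sparse learning of stochastic dynamic equations", "SINDy", 6.0, "2017-12-06"),
]

# Rank by Top-1 accuracy, highest first (the leaderboard's own ordering).
for rank, e in enumerate(sorted(ENTRIES, key=lambda e: e.top1, reverse=True), start=1):
    print(f"{rank:2d}. {e.top1:6.2f}%  {e.model}  ({e.released})")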