OpenCodePapers

Data-free Knowledge Distillation on QNLI
Leaderboard
| Paper | Code | Accuracy | Model | Release Date |
|---|---|---|---|---|
| GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | — | 91.7 | GOLD (T5-base) | 2024-03-28 |
| ZeroGen: Efficient Zero-shot Learning via Dataset Generation | ✓ | 88.5 | ZeroGen (T5-base) | 2022-02-16 |
| ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | ✓ | 85.9 | ProGen (T5-base) | 2022-10-22 |
| Prompt2Model: Generating Deployable Models from Natural Language Instructions | ✓ | 62.2 | Prompt2Model (T5-base) | 2023-08-23 |