OpenCodePapers

Natural Language Inference on MultiNLI (dev)

Natural Language Inference
Results over time
Leaderboard
| Paper | Code | Matched (acc. %) | Mismatched (acc. %) | Model | Release Date |
|---|---|---|---|---|---|
| TinyBERT: Distilling BERT for Natural Language Understanding | ✓ | 84.5 | 84.5 | TinyBERT-6 67M | 2019-09-23 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 83.74 | 84.2 | BERT-Large-uncased-PruneOFA (90% unstruct sparse) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 83.47 | 84.08 | BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 82.71 | 83.67 | BERT-Base-uncased-PruneOFA (85% unstruct sparse) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 81.45 | 82.43 | BERT-Base-uncased-PruneOFA (90% unstruct sparse) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 81.4 | 82.51 | BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 81.35 | 82.03 | DistilBERT-uncased-PruneOFA (85% unstruct sparse) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 80.68 | 81.47 | DistilBERT-uncased-PruneOFA (90% unstruct sparse) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 80.66 | 81.14 | DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 2021-11-10 |
| Prune Once for All: Sparse Pre-Trained Language Models | ✓ | 78.8 | 80.4 | DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 2021-11-10 |
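The Matched and Mismatched columns are classification accuracy on the two MultiNLI dev splits (`validation_matched` and `validation_mismatched`). Below is a minimal sketch of that evaluation loop, assuming a generic MNLI-finetuned checkpoint from the Hugging Face Hub; the model id and the `dev_accuracy` helper are illustrative placeholders, not part of any leaderboard entry.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed example checkpoint; any MNLI-finetuned classifier could be substituted.
MODEL_ID = "textattack/bert-base-uncased-MNLI"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def dev_accuracy(split_name: str, limit: int = 500) -> float:
    """Accuracy on a MultiNLI dev split ('validation_matched' or 'validation_mismatched')."""
    split = load_dataset("multi_nli", split=split_name).select(range(limit))
    gold_names = split.features["label"].names  # ['entailment', 'neutral', 'contradiction']
    correct = 0
    for ex in split:
        enc = tokenizer(ex["premise"], ex["hypothesis"], truncation=True, return_tensors="pt")
        with torch.no_grad():
            pred_id = model(**enc).logits.argmax(dim=-1).item()
        # Label strings differ between checkpoints (e.g. 'ENTAILMENT' vs 'LABEL_0');
        # adjust this mapping if the model uses generic label names.
        if model.config.id2label[pred_id].lower() == gold_names[ex["label"]]:
            correct += 1
    return 100.0 * correct / len(split)

print(f"Matched:    {dev_accuracy('validation_matched'):.2f}")
print(f"Mismatched: {dev_accuracy('validation_mismatched'):.2f}")
```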