OpenCodePapers

# Natural Language Inference on SciTail

Task: Natural Language Inference | Dataset Link
## Results over time

[Chart: leaderboard accuracy for each model, plotted by release date]
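As a stand-in for the interactive chart, here is a minimal plotting sketch of the same view: each model's reported accuracy against its release date. It assumes matplotlib is available; the data tuples are copied from the single-metric rows of the leaderboard table below.

```python
# Sketch of the "Results over time" view: reported accuracy vs. model
# release date. Data copied from the leaderboard table below; matplotlib
# is assumed to be installed.
from datetime import date
import matplotlib.pyplot as plt

results = [
    ("CA-MTL", date(2020, 9, 19), 96.8),
    ("MT-DNN", date(2019, 1, 31), 94.1),
    ("Finetuned Transformer LM", date(2018, 6, 11), 88.3),
    ("Hierarchical BiLSTM Max Pooling", date(2018, 8, 27), 86.0),
    ("RE2", date(2019, 8, 1), 86.0),
    ("CAFE", date(2017, 12, 30), 83.3),
    ("SplitEE-S", date(2023, 9, 17), 78.9),
]

fig, ax = plt.subplots()
for name, released, acc in results:
    ax.scatter(released, acc)
    ax.annotate(name, (released, acc), fontsize=7,
                xytext=(3, 3), textcoords="offset points")
ax.set_xlabel("Release date")
ax.set_ylabel("Accuracy (%)")
ax.set_title("SciTail NLI results over time")
plt.show()
```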
## Leaderboard
| Paper | Code | Accuracy | Dev Accuracy | Model | Release Date |
|---|---|---|---|---|---|
| Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | ✓ Link | 96.8 | | CA-MTL | 2020-09-19 |
| Multi-Task Deep Neural Networks for Natural Language Understanding | ✓ Link | 94.1 | | MT-DNN | 2019-01-31 |
| Improving Language Understanding by Generative Pre-Training | ✓ Link | 88.3 | | Finetuned Transformer LM | 2018-06-11 |
| Sentence Embeddings in NLI with Iterative Refinement Encoders | ✓ Link | 86.0 | | Hierarchical BiLSTM Max Pooling | 2018-08-27 |
| Simple and Effective Text Matching with Richer Alignment Features | ✓ Link | 86.0 | | RE2 | 2019-08-01 |
| Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference | | 83.3 | | CAFE | 2017-12-30 |
| SplitEE: Early Exit in Deep Neural Networks with Split Computing | ✓ Link | 78.9 | | SplitEE-S | 2023-09-17 |
| SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ Link | | 96.1 | MT-DNN-SMART_100%ofTrainingData | 2019-11-08 |
| SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ Link | | 91.3 | MT-DNN-SMART_10%ofTrainingData | 2019-11-08 |
| SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ Link | | 88.6 | MT-DNN-SMART_1%ofTrainingData | 2019-11-08 |
| SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ Link | | 82.3 | MT-DNN-SMART_0.1%ofTrainingData | 2019-11-08 |
| SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ Link | 95.2 | 96.6 | MT-DNN-SMARTLARGEv0 | 2019-11-08 |

Note: the MT-DNN-SMART_x%ofTrainingData rows report dev-set accuracy after fine-tuning on the stated fraction of the SciTail training data, following the domain-adaptation experiments in the SMART paper.
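For reference, a hedged sketch of how the Accuracy metric on this leaderboard is computed: load the SciTail test split, run an NLI classifier over premise-hypothesis pairs, and score exact label matches. The Hugging Face `scitail` dataset name, its `tsv_format` configuration, and the `predict` function are assumptions for illustration; substitute your own data loader and model.

```python
# Sketch: computing test-set accuracy on SciTail, the metric reported in
# the leaderboard above. Assumes the Hugging Face `datasets` package and
# the "scitail" dataset with its "tsv_format" config (an assumption;
# adjust names to your environment).
from datasets import load_dataset

def predict(premise: str, hypothesis: str) -> str:
    # Hypothetical placeholder: a real NLI model returns one of SciTail's
    # two labels, "entails" or "neutral".
    return "neutral"

test = load_dataset("scitail", "tsv_format", split="test")
correct = sum(
    predict(ex["premise"], ex["hypothesis"]) == ex["label"] for ex in test
)
print(f"Accuracy: {100.0 * correct / len(test):.1f}%")
```

Multiplying by 100 matches the leaderboard's convention of reporting accuracy as a percentage, e.g. 94.1 rather than 0.941.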