OpenCodePapers

Natural Language Inference on SciTail
Leaderboard
Paper | Code | Accuracy | Dev Accuracy | Model | Release Date
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | ✓ | 96.8 | | CA-MTL | 2020-09-19
Multi-Task Deep Neural Networks for Natural Language Understanding | ✓ | 94.1 | | MT-DNN | 2019-01-31
Improving Language Understanding by Generative Pre-Training | ✓ | 88.3 | | Finetuned Transformer LM | 2018-06-11
Sentence Embeddings in NLI with Iterative Refinement Encoders | ✓ | 86.0 | | Hierarchical BiLSTM Max Pooling | 2018-08-27
Simple and Effective Text Matching with Richer Alignment Features | ✓ | 86.0 | | RE2 | 2019-08-01
Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference | | 83.3 | | CAFE | 2017-12-30
SplitEE: Early Exit in Deep Neural Networks with Split Computing | ✓ | 78.9 | | SplitEE-S | 2023-09-17
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ | | 96.1 | MT-DNN-SMART (100% of training data) | 2019-11-08
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ | | 91.3 | MT-DNN-SMART (10% of training data) | 2019-11-08
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ | | 88.6 | MT-DNN-SMART (1% of training data) | 2019-11-08
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ | | 82.3 | MT-DNN-SMART (0.1% of training data) | 2019-11-08
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | ✓ | 96.6 | 95.2 | MT-DNN-SMART_LARGE v0 | 2019-11-08
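All figures above are percentage accuracies on SciTail's two-class (entails/neutral) entailment task. As a minimal sketch of how such a score is computed, assuming the Hugging Face `datasets` copy of SciTail (`allenai/scitail`, `tsv_format` config) and using a trivial majority-class predictor as a stand-in for a real model:

```python
from datasets import load_dataset

# Assumption: the Hugging Face hub copy of SciTail; the "tsv_format" config
# exposes premise, hypothesis, and a string label ("entails" or "neutral").
test = load_dataset("allenai/scitail", "tsv_format", split="test")
gold = test["label"]

# Hypothetical predictions: a majority-class baseline in place of a model's
# per-example outputs.
predictions = ["neutral"] * len(gold)

# Accuracy is the fraction of premise/hypothesis pairs labeled correctly,
# reported on the leaderboard as a percentage.
accuracy = sum(p == g for p, g in zip(predictions, gold)) / len(gold)
print(f"Test accuracy: {100 * accuracy:.1f}%")
```

Dev Accuracy entries are scored the same way on the validation split; the SMART rows with a percentage of training data report dev accuracy after fine-tuning on the stated fraction of the training set.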