OpenCodePapers

Common Sense Reasoning on SWAG
Results over time (interactive chart; the underlying results appear in the leaderboard table below)
Leaderboard
| Paper | Code | Test (accuracy %) | Dev (accuracy %) | Model Name | Release Date |
|-------|------|-------------------|------------------|------------|--------------|
| DeBERTa: Decoding-enhanced BERT with Disentangled Attention | ✓ | 90.8 | | DeBERTa-large | 2020-06-05 |
| RoBERTa: A Robustly Optimized BERT Pretraining Approach | ✓ | 89.9 | | RoBERTa | 2019-07-26 |
| BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | ✓ | 86.3 | 86.6 | BERT-LARGE | 2018-10-11 |
| SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference | | 59.2 | 59.1 | ESIM + ELMo | 2018-08-16 |
| SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference | | 52.7 | 51.9 | ESIM + GloVe | 2018-08-16 |
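Test and Dev report accuracy on SWAG's 4-way multiple-choice task: given a context, the model must pick the correct ending out of four candidates (Dev is the public validation split; Test labels are held out). As a rough sketch of how such a score is produced, the snippet below scores one validation example with Hugging Face `datasets` and `transformers`; the checkpoint name is a placeholder assumption, since any model fine-tuned with a multiple-choice head plugs in the same way.

```python
# Minimal sketch: scoring one SWAG example as 4-way multiple choice.
import torch
from datasets import load_dataset
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Placeholder checkpoint (assumption): substitute a model fine-tuned on
# SWAG; an off-the-shelf base model gets a randomly initialized
# classifier head and near-chance predictions.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMultipleChoice.from_pretrained(checkpoint).eval()

ex = load_dataset("swag", "regular", split="validation")[0]
context = ex["sent1"]
# Each candidate is the start of the second sentence plus one of the
# four possible endings.
candidates = [ex["sent2"] + " " + ex[f"ending{i}"] for i in range(4)]

# Encode the context against all four candidates; the multiple-choice
# head expects inputs of shape (batch, num_choices, seq_len).
enc = tokenizer([context] * 4, candidates,
                padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits

print("predicted:", logits.argmax(-1).item(), "gold:", ex["label"])
```

Dev accuracy is the fraction of validation examples where the predicted ending matches the gold label; Test accuracy is computed the same way on the hidden test labels.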