OpenCodePapers
Question Answering on SQuAD 2.0 (dev)
Dataset Link
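For reference, the dev split these scores are reported on can be loaded with the Hugging Face datasets library. This is a minimal sketch under that assumption; the leaderboard itself does not prescribe any particular loader.

```python
# Sketch: load the SQuAD 2.0 dev (validation) split via Hugging Face datasets.
# The library choice is an assumption of this example, not part of the leaderboard.
from datasets import load_dataset

dev = load_dataset("squad_v2", split="validation")  # ~11,873 questions
example = dev[0]
print(example["question"])
print(example["answers"]["text"])  # empty list for unanswerable questions
```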
Results over time (interactive chart)
Leaderboard
| Paper | Code | F1 | EM | Model | Release Date |
|---|---|---|---|---|---|
| XLNet: Generalized Autoregressive Pretraining for Language Understanding | ✓ Link | 90.6 | 87.9 | XLNet (single model) | 2019-06-19 |
| Dice Loss for Data-imbalanced NLP Tasks | ✓ Link | 89.51 | 87.65 | XLNet+DSC | 2019-11-07 |
| RoBERTa: A Robustly Optimized BERT Pretraining Approach | ✓ Link | 89.4 | 86.5 | RoBERTa (no data aug) | 2019-07-26 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ✓ Link | 88.1 | 85.1 | ALBERT xxlarge | 2019-09-26 |
| SG-Net: Syntax-Guided Machine Reading Comprehension | ✓ Link | 87.9 | 85.1 | SG-Net | 2019-08-14 |
| SpanBERT: Improving Pre-training by Representing and Predicting Spans | ✓ Link | 86.8 | – | SpanBERT | 2019-07-24 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ✓ Link | 85.9 | 83.1 | ALBERT xlarge | 2019-09-26 |
| Semantics-aware BERT for Language Understanding | ✓ Link | 83.6 | 80.9 | SemBERT large | 2019-09-05 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ✓ Link | 82.1 | 79.0 | ALBERT large | 2019-09-26 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ✓ Link | 79.1 | 76.1 | ALBERT base | 2019-09-26 |
| Read + Verify: Machine Reading Comprehension with Unanswerable Questions | – | 74.8 | 72.3 | RMR + ELMo (Model-III) | 2018-08-17 |
| U-Net: Machine Reading Comprehension with Unanswerable Questions | ✓ Link | 74.0 | 70.3 | U-Net | 2018-10-12 |
| TinyBERT: Distilling BERT for Natural Language Understanding | ✓ Link | 73.4 | 69.9 | TinyBERT-6 67M | 2019-09-23 |
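The F1 and EM columns follow the standard SQuAD 2.0 evaluation: predicted and gold answers are normalized (lowercased, punctuation and articles removed, whitespace collapsed), EM checks for an exact string match, and F1 measures token overlap. Below is a minimal per-question sketch in that style; the function names and example strings are illustrative, and the official squad_v2 evaluation script remains the reference for the leaderboard numbers.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction, gold):
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    # SQuAD 2.0: unanswerable questions have an empty gold string, so the
    # score is 1.0 only when the system also predicts "no answer".
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Dataset-level scores average these per-question values, taking the max
# over the available gold answers for each question.
print(exact_match("The Eiffel Tower", "eiffel tower"))                    # 1.0
print(round(f1_score("the eiffel tower in paris", "eiffel tower"), 3))    # 0.667
```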