OpenCodePapers
Question Answering on Quora Question Pairs
Question Answering
Leaderboard
| Paper | Code | Accuracy | Model | Release Date |
|---|---|---|---|---|
| XLNet: Generalized Autoregressive Pretraining for Language Understanding | ✓ | 92.3% | XLNet (single model) | 2019-06-19 |
| DeBERTa: Decoding-enhanced BERT with Disentangled Attention | ✓ | 92.3% | DeBERTa (large) | 2020-06-05 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ✓ | 90.5% | ALBERT | 2019-09-26 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ | 90.4% | T5-11B | 2019-10-23 |
| CLEAR: Contrastive Learning for Sentence Representation | — | 90.3% | MLM+ subs+ del-span | 2020-12-31 |
| RoBERTa: A Robustly Optimized BERT Pretraining Approach | ✓ | 90.2% | RoBERTa (ensemble) | 2019-07-26 |
| ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | ✓ | 90.1% | ERNIE 2.0 Large | 2019-07-29 |
| ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | ✓ | 90.1% | ELECTRA | 2020-03-23 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ | 89.9% | T5-Large 770M | 2019-10-23 |
| ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | ✓ | 89.8% | ERNIE 2.0 Base | 2019-07-29 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ | 89.7% | T5-3B | 2019-10-23 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ | 89.4% | T5-Base | 2019-10-23 |
| Simple and Effective Text Matching with Richer Alignment Features | ✓ | 89.2% | RE2 | 2019-08-01 |
| DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | ✓ | 89.2% | DistilBERT 66M | 2019-10-02 |
| Big Bird: Transformers for Longer Sequences | ✓ | 88.6% | BigBird | 2020-07-28 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ | 88.0% | T5-Small | 2019-10-23 |
| Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | ✓ | 83.03% | SWEM-concat | 2018-05-24 |
| SqueezeBERT: What can computer vision teach NLP about efficient neural networks? | ✓ | 80.3% | SqueezeBERT | 2020-06-19 |
| How to Train BERT with an Academic Budget | ✓ | 70.7% | 24hBERT | 2021-04-15 |
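The Accuracy column is the fraction of Quora question pairs for which a model's duplicate/not-duplicate prediction matches the gold label. A minimal sketch of that computation, using made-up toy labels rather than real QQP data:

```python
# Accuracy for a duplicate-question classifier: the share of pairs
# whose predicted label equals the gold label.
# Labels below are toy values for illustration, not real QQP data.

gold = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # 1 = duplicate pair, 0 = not
pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]  # hypothetical model output

correct = sum(g == p for g, p in zip(gold, pred))
accuracy = correct / len(gold)
print(f"{accuracy:.1%}")  # → 80.0%
```

On the real benchmark the same ratio is taken over the full evaluation split, which is how a score like 92.3% arises.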