OpenCodePapers
Adversarial Robustness on AdvGLUE
Results over time: [interactive chart of accuracy vs. release date omitted]
Leaderboard
All entries below are from "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" (released 2021-11-04); code is available for every result. Models are sorted by accuracy, descending.

| Model | Accuracy |
|---|---|
| DeBERTa (single model) | 0.6086 |
| ALBERT (single model) | 0.5922 |
| T5 (single model) | 0.5682 |
| SMART_RoBERTa (single model) | 0.5371 |
| FreeLB (single model) | 0.5048 |
| RoBERTa (single model) | 0.5021 |
| InfoBERT (single model) | 0.4603 |
| ELECTRA (single model) | 0.4169 |
| BERT (single model) | 0.3369 |
| SMART_BERT (single model) | 0.3029 |