Paper | Code | Accuracy | ModelName | ReleaseDate |
---|---|---|---|---|
RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | ✓ Link | 0.805 | Human Benchmark | 2020-10-29 |
 | | 0.735 | ruT5-large-finetune | |
 | | 0.729 | RuBERT conversational | |
 | | 0.726 | RuBERT plain | |
 | | 0.715 | ruRoberta-large finetune | |
 | | 0.706 | ruBert-base finetune | |
 | | 0.690 | Multilingual BERT | |
 | | 0.682 | ruT5-base-finetune | |
 | | 0.682 | ruBert-large finetune | |
 | | 0.657 | SBERT_Large_mt_ru_finetuning | |
 | | 0.654 | SBERT_Large | |
 | | 0.647 | RuGPT3Large | |
 | | 0.642 | RuGPT3Medium | |
 | | 0.633 | MT5 Large | |
Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 0.595 | heuristic majority | 2021-05-03 |
 | | 0.587 | Golden Transformer | |
 | | 0.587 | YaLM 1.0B few-shot | |
Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 0.587 | majority_class | 2021-05-03 |
 | | 0.570 | RuGPT3Small | |
RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | ✓ Link | 0.570 | Baseline TF-IDF1.1 | 2020-10-29 |
 | | 0.565 | RuGPT3XL few-shot | |
Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 0.528 | Random weighted | 2021-05-03 |