OpenCodePapers

Question Answering on DaNetQA

Question Answering
Results over time (interactive chart omitted)
Leaderboard
| Model | Accuracy | Paper | Code | Release Date |
|-------|----------|-------|------|--------------|
| Golden Transformer | 0.917 | | | |
| Human Benchmark | 0.915 | RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | ✓ | 2020-10-29 |
| ruRoberta-large finetune | 0.82 | | | |
| ruBert-large finetune | 0.773 | | | |
| ruT5-base-finetune | 0.732 | | | |
| ruBert-base finetune | 0.712 | | | |
| ruT5-large-finetune | 0.711 | | | |
| SBERT_Large_mt_ru_finetuning | 0.697 | | | |
| SBERT_Large | 0.675 | | | |
| MT5 Large | 0.657 | mT5: A massively multilingual pre-trained text-to-text transformer | ✓ | 2020-10-22 |
| heuristic majority | 0.642 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 2021-05-03 |
| RuBERT plain | 0.639 | | | |
| YaLM 1.0B few-shot | 0.637 | | | |
| RuGPT3Medium | 0.634 | | | |
| Multilingual Bert | 0.624 | | | |
| Baseline TF-IDF1.1 | 0.621 | RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | ✓ | 2020-10-29 |
| RuGPT3Small | 0.61 | | | |
| RuBERT conversational | 0.606 | | | |
| RuGPT3Large | 0.604 | | | |
| RuGPT3XL few-shot | 0.59 | | | |
| Random weighted | 0.52 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 2021-05-03 |
| majority_class | 0.503 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 2021-05-03 |
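The bottom rows are trivial baselines for this binary yes/no task. As a rough illustration only (the label counts below are toy values chosen for the example, not the actual DaNetQA test set, and the real "Random weighted" score of 0.52 was measured empirically rather than computed from a formula), the expected accuracy of a majority-class predictor and of a classifier that samples predictions from the label distribution can be sketched as:

```python
from collections import Counter

def majority_class_accuracy(labels):
    """Accuracy of always predicting the most frequent label."""
    _, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

def random_weighted_accuracy(labels):
    """Expected accuracy when predictions are sampled from the
    empirical label distribution: sum over classes of p_c**2."""
    n = len(labels)
    return sum((count / n) ** 2 for count in Counter(labels).values())

# Toy yes/no label distribution (hypothetical, near-balanced)
labels = [True] * 503 + [False] * 497
print(majority_class_accuracy(labels))   # 0.503
print(random_weighted_accuracy(labels))  # ~0.5
```

A near-balanced label split is why both baselines land close to 0.5, which puts the roughly 0.1 to 0.4 gap to the trained models in context.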