| Paper | Code | Avg | Sentence-pair Classification | Structured Prediction | Question Answering | Sentence Retrieval | Model | Release Date |
|---|---|---|---|---|---|---|---|---|
| XLM-E: Cross-lingual Language Model Pre-training via ELECTRA | ✓ Link | 85.5 | 91.0 | 83.8 | 77.1 | 94.4 | Turing ULR v6 | 2021-06-30 |
| | | 85.0 | 90.4 | 83.1 | 76.3 | 94.4 | MShenNonG | |
| | | 85.0 | 90.4 | 83.1 | 76.3 | 94.4 | MShenNonG+TDT | |
| XLM-E: Cross-lingual Language Model Pre-training via ELECTRA | ✓ Link | 84.5 | 90.3 | 81.7 | 76.3 | 93.7 | Turing ULR v5 | 2021-06-30 |
| | | 84.1 | 90.1 | 81.4 | 75.0 | 94.2 | CoFe | |
| | | 83.7 | 90.0 | 81.4 | 74.3 | 93.7 | Turing ULR v5 (XLM-E) | |
| | | 82.2 | 89.3 | 75.5 | 75.2 | 92.4 | InfoXLM-XFT | |
| | | 82.0 | 89.2 | 74.6 | 75.2 | 92.4 | Ensemble-Distil-XFT (ED-XFT) | |
| | | 82.0 | 89.0 | 76.7 | 73.4 | 93.3 | VECO | |
| | | 82.0 | 89.0 | 76.7 | 73.4 | 93.3 | VECO + HICTL | |
| | | 81.7 | 88.3 | 80.6 | 71.9 | 90.8 | Polyglot | |
| | | 81.6 | 88.4 | 76.2 | 72.5 | 93.7 | Unicoder + ZCode | |
| ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora | ✓ Link | 80.9 | 87.9 | 75.6 | 72.3 | 91.9 | ERNIE-M | 2020-12-31 |
| | | 80.8 | 89.0 | 74.4 | 71.9 | 92.6 | HiCTL | |
| InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training | ✓ Link | 80.7 | 88.8 | 75.4 | 72.9 | 89.3 | T-ULRv2 + StableTune | 2020-07-15 |
| | | 79.9 | 88.2 | 74.6 | 71.7 | 89.0 | Anonymous3 | |
| FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding | ✓ Link | 77.0 | 87.5 | 71.9 | 68.5 | 84.4 | FILTER | 2020-09-10 |
| | | 76.5 | 86.3 | 90.8 | 59.7 | 77.5 | Creative | |
| English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too | | 73.5 | 83.9 | 69.4 | 67.2 | 76.5 | X-STILTs | 2020-05-26 |
| | | 56.1 | 84.1 | 73.3 | 68.6 | NA | RemBERT | |
| | | 53.1 | 75.3 | 66.9 | 52.5 | 18.0 | Anonymous5 | |
| mT5: A massively multilingual pre-trained text-to-text transformer | ✓ Link | 40.9 | 89.8 | NA | 73.6 | NA | mT5 | 2020-10-22 |
| | | 39.3 | 44.2 | 0.0 | 65.5 | 34.5 | Anonymous6 | |
| XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation | ✓ Link | 59.6 | 73.7 | 66.3 | 53.8 | 47.7 | mBERT | |
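As a minimal sketch of working with a table like this programmatically (the helper names and the subset of rows are illustrative, not part of the leaderboard), one might parse rows into records and re-rank them by the Avg column, treating NA cells as missing values:

```python
# Hypothetical helper for ranking leaderboard rows; "NA" or empty cells
# are treated as missing values (None) rather than zeros.

def parse_row(cells):
    """Turn a list of cell strings into a record dict."""
    def num(s):
        return None if s in ("NA", "") else float(s)
    model, avg, spc, sp, qa, sr = cells
    return {
        "model": model,
        "avg": num(avg),
        "sentence_pair": num(spc),
        "structured": num(sp),
        "qa": num(qa),
        "retrieval": num(sr),
    }

# A handful of rows copied from the table above
# (model, Avg, Sentence-pair, Structured, QA, Retrieval).
rows = [
    ["Turing ULR v6", "85.5", "91.0", "83.8", "77.1", "94.4"],
    ["ERNIE-M", "80.9", "87.9", "75.6", "72.3", "91.9"],
    ["mBERT", "59.6", "73.7", "66.3", "53.8", "47.7"],
    ["RemBERT", "56.1", "84.1", "73.3", "68.6", "NA"],
]

records = [parse_row(r) for r in rows]
ranked = sorted(records, key=lambda r: r["avg"], reverse=True)
print([r["model"] for r in ranked])
# → ['Turing ULR v6', 'ERNIE-M', 'mBERT', 'RemBERT']
```

Note that the leaderboard's Avg column is taken verbatim from the table; it is an average over the benchmark's individual tasks, not a simple mean of the four category scores shown here.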