| Paper | Code | BLEU score | SacreBLEU | Model | Date |
|---|---|---|---|---|---|
| Finetuned Language Models Are Zero-Shot Learners | ✓ Link | 40.7 | | FLAN 137B (few-shot, k=11) | 2021-09-03 |
| Finetuned Language Models Are Zero-Shot Learners | ✓ Link | 38.9 | | FLAN 137B (zero-shot) | 2021-09-03 |
| Edinburgh Neural Machine Translation Systems for WMT 16 | ✓ Link | 38.6 | | Attentional encoder-decoder + BPE | 2016-06-09 |
| Linguistic Input Features Improve Neural Machine Translation | ✓ Link | 32.9 | | Linguistic Input Features | 2016-06-09 |
| Unsupervised Statistical Machine Translation | ✓ Link | 23.05 | | SMT + iterative backtranslation (unsupervised) | 2018-09-04 |
| Unsupervised Neural Machine Translation with Weight Sharing | ✓ Link | 14.62 | | Unsupervised NMT + weight-sharing | 2018-04-24 |
| Unsupervised Machine Translation Using Monolingual Corpora Only | ✓ Link | 13.33 | | Unsupervised S2S with attention | 2017-10-31 |
| Exploiting Monolingual Data at Scale for Neural Machine Translation | | | 47.5 | Exploiting Mono at Scale (single) | 2019-11-01 |
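The two metric columns are not directly comparable: classic BLEU depends on the tokenizer each paper used, while SacreBLEU fixes tokenization and reference handling so corpus-level scores can be compared across systems. Below is a minimal sketch of corpus-level scoring with the `sacrebleu` Python package; the package and its `corpus_bleu` helper are real, but the hypothesis/reference sentences are hypothetical placeholders rather than WMT data.

```python
# Minimal sketch: corpus-level SacreBLEU scoring.
# Assumes `pip install sacrebleu`; sentences below are placeholders.
import sacrebleu

# System outputs: one detokenized sentence per segment.
hypotheses = [
    "The cat sat on the mat.",
    "There is a book on the table.",
]

# References: a list of reference streams, each aligned with `hypotheses`
# (multiple streams would mean multiple references per segment).
references = [
    [
        "The cat sat on the mat.",
        "A book lies on the table.",
    ]
]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"SacreBLEU: {bleu.score:.2f}")  # corpus-level score, 0-100 scale
print(bleu)  # full report: n-gram precisions, brevity penalty, signature
```

The same package ships a CLI that fetches official test sets, e.g. `sacrebleu -t wmt16 -l de-en -i output.detok.txt` (language pair shown for illustration), which is the typical way SacreBLEU numbers like those in the table are produced.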