| Paper | Code | Accuracy (%) | Model Name | Release Date |
|---|---|---|---|---|
| Finetuned Language Models Are Zero-Shot Learners | ✓ | 78.4 | FLAN 137B (zero-shot) | 2021-09-03 |
| Finetuned Language Models Are Zero-Shot Learners | ✓ | 78.2 | FLAN 137B (few-shot, k=16) | 2021-09-03 |
| LLaMA: Open and Efficient Foundation Language Models | ✓ | 60.2 | LLaMA 65B (zero-shot) | 2023-02-27 |
| LLaMA: Open and Efficient Foundation Language Models | ✓ | 58.6 | LLaMA 33B (zero-shot) | 2023-02-27 |
| Language Models are Few-Shot Learners | ✓ | 57.6 | GPT-3 175B (zero-shot) | 2020-05-28 |
| LLaMA: Open and Efficient Foundation Language Models | ✓ | 57.2 | LLaMA 7B (zero-shot) | 2023-02-27 |
| LLaMA: Open and Efficient Foundation Language Models | ✓ | 56.4 | LLaMA 13B (zero-shot) | 2023-02-27 |
| PaLM: Scaling Language Modeling with Pathways | ✓ | 53.4 | PaLM 540B (zero-shot) | 2022-04-05 |
| PaLM: Scaling Language Modeling with Pathways | ✓ | 50.4 | PaLM 62B (zero-shot) | 2022-04-05 |