Paper | Code | F1 | Precision | Recall | ModelName | ReleaseDate |
---|---|---|---|---|---|---|
PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts | ✓ Link | 63.8 | 63.3 | 68.0 | PaCE | 2023-05-24 |
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ Link | 58.9 | 54.1 | 64.6 | T5-3B | 2019-10-23 |
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ Link | 58.1 | 58.2 | 57.9 | T5-base | 2019-10-23 |
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | ✓ Link | 53.2 | 56.1 | 50.6 | BERT | 2018-10-11 |
ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision | ✓ Link | 52.4 | 55.4 | 58.9 | ViLT | 2021-02-05 |
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ✓ Link | 52.2 | 44.8 | 62.7 | ALBERT-base | 2019-09-26 |
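The F1 column above is the harmonic mean of the Precision and Recall columns. As a minimal sketch (the function name is ours, not from the table), the relationship can be checked against a row such as T5-base:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Example using the T5-base row (precision 58.2, recall 57.9);
# the result is close to the reported F1 of 58.1.
print(round(f1_score(58.2, 57.9), 2))
```

Small discrepancies between a reported F1 and the value recomputed from the rounded precision/recall figures are expected, since the table entries are themselves rounded.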