Paper | Code | EM | F1 | Average F1 | ModelName | ReleaseDate |
---|---|---|---|---|---|---|
ByT5: Towards a token-free future with pre-trained byte-to-byte models | ✓ Link | 63.6 | 79.7 | | ByT5 XXL | 2021-05-28 |
Rethinking embedding coupling in pre-trained language models | ✓ Link | 46.9 | 63.8 | | Decoupled | 2020-10-24 |
Rethinking embedding coupling in pre-trained language models | ✓ Link | 46.2 | 63.2 | | Coupled | 2020-10-24 |
mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models | ✓ Link | | | 74.2 | mLUKE-E | 2021-10-15 |