Paper | Code | BLEU | METEOR | FactSpotter | ModelName | ReleaseDate |
---|---|---|---|---|---|---|
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 48.74 | 0.4074 | 96.65 | T5B Baseline | 2023-10-25 |
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 48.37 | 0.4072 | 97.60 | FactT5B | 2023-10-25 |
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 47.51 | 0.4043 | 95.86 | JointGT Baseline | 2023-10-25 |
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 47.39 | 0.4032 | 97.25 | FactJointGT | 2023-10-25 |
Control Prefixes for Parameter-Efficient Text Generation | ✓ Link | | 0.411 | | Control Prefixes (T5-large) | 2021-10-15 |
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics | | | 0.115 | | T5 | 2021-02-02 |
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics | | | 0.107 | | BART | 2021-02-02 |
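
For scale: BLEU is reported on 0-100, METEOR on 0-1, and FactSpotter (the first paper's learned faithfulness metric) as a percentage. Below is a minimal sketch of computing the two surface metrics with standard open-source scorers, assuming sacrebleu and NLTK with hypothetical hypothesis/reference strings; the papers above may use their own evaluation scripts.

```python
# Minimal sketch: corpus BLEU via sacrebleu, corpus METEOR via NLTK.
import sacrebleu
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)   # METEOR matches stems/synonyms via WordNet
nltk.download("omw-1.4", quiet=True)   # WordNet data dependency in newer NLTK

# Hypothetical system outputs with one reference per hypothesis.
hypotheses = ["the Aarhus airport serves the city of Aarhus"]
refs_per_hyp = [["Aarhus Airport serves the city of Aarhus."]]

# sacrebleu expects parallel reference streams (stream j holds the j-th
# reference of every hypothesis), so transpose the per-hypothesis lists.
bleu = sacrebleu.corpus_bleu(hypotheses, [list(s) for s in zip(*refs_per_hyp)])
print(f"BLEU: {bleu.score:.2f}")       # 0-100 scale, as in the BLEU column

# NLTK's METEOR scores one tokenized sentence at a time; a simple
# corpus-level figure is the mean over sentences, on a 0-1 scale.
meteor = sum(
    meteor_score([r.split() for r in refs], hyp.split())
    for hyp, refs in zip(hypotheses, refs_per_hyp)
) / len(hypotheses)
print(f"METEOR: {meteor:.4f}")         # 0-1 scale, as in the METEOR column
```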