Paper | Code | METEOR | BLEU | BERTScore | BLEURT | MoverScore | TER | FactSpotter | ModelName | ReleaseDate |
---|---|---|---|---|---|---|---|---|---|---|
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 0.4074 | 48.47 | 0.9505 | 0.6749 | | | 96.65 | T5B Baseline | 2023-10-25 |
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 0.4072 | 48.37 | 0.9505 | 0.6743 | | | 97.60 | FactT5B | 2023-10-25 |
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 0.4043 | 47.51 | 0.9492 | 0.6733 | | | 95.86 | JointGT Baseline | 2023-10-25 |
FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation | ✓ Link | 0.4032 | 47.39 | 0.9492 | 0.6726 | | | 97.25 | FactJointGT | 2023-10-25 |
HTLM: Hyper-Text Pre-Training and Prompting of Language Models | | 0.39 | 47.2 | 0.94 | 0.40 | 0.51 | 0.44 | | HTLM (fine-tuning) | 2021-07-14 |
HTLM: Hyper-Text Pre-Training and Prompting of Language Models | | 0.39 | 47.0 | 0.94 | 0.40 | 0.51 | 0.46 | | GPT-2-Large (fine-tuning) | 2021-07-14 |