| Paper | Code | KILT-AC | R-Prec | Recall@5 | Accuracy | ModelName | ReleaseDate |
|---|---|---|---|---|---|---|---|
| Re2G: Retrieve, Rerank, Generate | ✓ Link | 78.53 | 88.92 | 92.52 | 89.55 | Re2G | 2022-07-13 |
| | | 71.28 | 81.45 | 89.56 | 89.54 | intersect | |
| | | 65.68 | 74.77 | 87.89 | 88.99 | Wikipedia | |
| | | 64.41 | 75.6 | 84.95 | 85.58 | KGI | |
| | | 63.94 | 74.48 | 87.52 | 86.32 | Multitask DPR + BART | |
| | | 58.58 | 72.93 | 73.52 | 69.68 | BERT + DPR | |
| KILT: a Benchmark for Knowledge Intensive Language Tasks | ✓ Link | 53.45 | 61.94 | 75.55 | 86.31 | RAG | 2020-09-04 |
| | | 47.68 | 55.33 | 74.29 | 86.74 | BART + DPR | |
| | | 41.88 | 49.24 | 70.16 | 66.1 | NSMN | |
| | | 0.0 | 84.45 | 88.62 | 0.0 | TABi | |
| | | 0.0 | 84.07 | 89.41 | 0.0 | chriskuei | |
| | | 0.0 | 83.64 | 88.15 | 0.0 | GENRE | |
| | | 0.0 | 74.48 | 87.52 | 0.0 | Multi-task DPR | |
| | | 0.0 | 0.0 | 0.0 | 89.12 | Sphere | |
| | | 0.0 | 0.0 | 0.0 | 88.45 | aa_evalai | |
| | | 0.0 | 0.0 | 0.0 | 78.93 | BART | |
| KILT: a Benchmark for Knowledge Intensive Language Tasks | ✓ Link | 0.0 | 0.0 | 0.0 | 76.3 | T5-base | 2020-09-04 |
| | | 0.0 | 0.0 | 0.0 | 76.26 | GENRE+roBERTa finetuning | |
| | | 0.0 | 0.0 | 0.0 | 72.34 | SVM with rbf kernel | |
| | | 0.0 | 0.0 | 0.0 | 71.58 | ElefPav | |
| | | 0.0 | 0.0 | 0.0 | 71.42 | Alessandro_Tansel | |
| | | 0.0 | 0.0 | 0.0 | 71.38 | JuanTran | |
| | | 0.0 | 0.0 | 0.0 | 71.24 | Logistic Regression | |
| | | 0.0 | 0.0 | 0.0 | 71.12 | QDA | |
| | | 0.0 | 0.0 | 0.0 | 70.71 | SVM | |
| | | 0.0 | 0.0 | 0.0 | 69.71 | stupidTeam | |
| | | 0.0 | 0.0 | 0.0 | 69.41 | QDA_EMB2 | |
| | | 0.0 | 0.0 | 0.0 | 68.43 | SVM | |
| | | 0.0 | 0.0 | 0.0 | 67.98 | Marco Aurelio Sterpa | |
| | | 0.0 | 0.0 | 0.0 | 61.6 | its_all_greek_to_me | |
| | | 0.0 | 0.0 | 0.0 | 33.58 | multi-task small | |
| | | 0.0 | 0.0 | 0.0 | 23.01 | LogisticRegression | |
| | | 0.0 | 0.0 | 0.0 | 12.57 | galimaldo | |