Paper | Code | Accuracy (2 classes) | Macro F1 | Model Name | Release Date |
---|---|---|---|---|---|
Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | ✓ Link | 0.8798 | 0.8797 | Space-XLNet | 2024-01-30 |
Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | ✓ Link | 0.8160 | 0.8156 | XLNet | 2024-01-30 |
Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | ✓ Link | 0.8110 | 0.8108 | Space-BERT | 2024-01-30 |
Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | ✓ Link | 0.6588 | 0.6555 | BERT-base | 2024-01-30 |
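The two metrics reported above can be reproduced from raw predictions. Below is a minimal sketch of accuracy and macro-averaged F1 for a binary task; the label arrays are illustrative placeholders, not data from the paper.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_macro(y_true, y_pred, classes=(0, 1)):
    # Macro F1: compute per-class F1, then take the unweighted mean,
    # so both classes count equally regardless of class frequency.
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical labels for a 2-class task (illustration only).
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(round(accuracy(y_true, y_pred), 4))  # → 0.6667
print(round(f1_macro(y_true, y_pred), 4))  # → 0.6667
```

When accuracy and macro F1 are nearly identical, as in every row of the table, the two classes are being predicted with similar reliability; a large gap would instead indicate class imbalance or one class being handled much worse than the other.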