A benchmark for toxic comment classification on Civil Comments dataset (2023-01-26; code available):

| Model | Metric 1 | Metric 2 | Metric 3 | Metric 4 | Metric 5 | Metric 6 | Metric 7 | Metric 8 |
|---|---|---|---|---|---|---|---|---|
| RoBERTa Focal Loss | 0.901 | 0.8807 | 0.5524 | 0.4017 | 0.8839 | 0.4648 | 0.9818 | 0.9581 |
| ALBERT | 0.8982 | 0.8734 | 0.4845 | 0.3247 | 0.9104 | 0.3541 | 0.979 | 0.9499 |
| BERTweet | 0.8945 | 0.878 | 0.4928 | 0.3363 | 0.9216 | 0.3612 | 0.979 | 0.9603 |
| HateBERT | 0.8915 | 0.8744 | 0.4844 | 0.3297 | 0.9165 | 0.3679 | 0.9791 | 0.9589 |
| RoBERTa BCE | 0.8901 | 0.88 | 0.5359 | 0.3836 | 0.8891 | 0.4749 | 0.9813 | 0.9616 |
| XLM RoBERTa | 0.8859 | | 0.468 | 0.3135 | 0.923 | | | |
| XLNet | 0.8834 | 0.8689 | 0.4586 | 0.3045 | 0.9254 | 0.3336 | | 0.9597 |
| DistilBERT | 0.874 | 0.8762 | 0.5115 | 0.3572 | 0.9001 | 0.3879 | 0.9804 | 0.9644 |
| BiGRU | 0.8616 | | | | | | | |
| Unfreeze GloVe ResNet 44 | 0.8493 | 0.8421 | 0.5958 | 0.4835 | 0.7759 | 0.4648 | 0.966 | |
| Unfreeze GloVe ResNet 56 | 0.8445 | 0.8487 | | | 0.8707 | 0.3778 | 0.9639 | |
| Compact Convolutional Transformer (CCT) | 0.8307 | 0.8133 | 0.4874 | 0.3507 | 0.7983 | 0.3428 | 0.9526 | 0.9447 |
| Freeze GloVe ResNet 44 | 0.7876 | 0.8219 | 0.5591 | 0.4631 | 0.7053 | 0.4189 | | |
| BiLSTM | | 0.8636 | 0.5115 | 0.3572 | | | | |
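The gap between the two RoBERTa rows comes down to the training objective named in each row: focal loss versus plain binary cross-entropy. As a minimal sketch of the focal-loss variant, here is a self-contained PyTorch module; the alpha and gamma defaults are the common choices from Lin et al. (2017), not values taken from the benchmark paper.

```python
import torch
import torch.nn.functional as F

class BinaryFocalLoss(torch.nn.Module):
    """Binary focal loss (Lin et al., 2017): down-weights easy examples so
    training focuses on the rare toxic class. alpha/gamma are assumed defaults."""

    def __init__(self, alpha: float = 0.25, gamma: float = 2.0):
        super().__init__()
        self.alpha = alpha
        self.gamma = gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Per-example BCE; exp(-bce) recovers p_t, the probability the
        # model assigns to the true class.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = torch.exp(-bce)
        alpha_t = self.alpha * targets + (1 - self.alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** self.gamma * bce).mean()
```

In a fine-tuning loop this would simply replace `torch.nn.BCEWithLogitsLoss` on the classifier head; everything else about the setup stays the same.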
PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning (2024-03-31; code available):

| Model | Metric 7 |
|---|---|
| ResNet + RoBERTa finetune | 0.97 |
| Trompt + OpenAI embedding | 0.947 |
| ResNet + OpenAI embedding | 0.945 |
| Trompt + RoBERTa embedding | 0.885 |
| ResNet + RoBERTa embedding | 0.882 |
| LightGBM + RoBERTa embedding | 0.865 |
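Each PyTorch Frame entry pairs a tabular backbone (ResNet, Trompt, or LightGBM) with a pretrained text encoder that is either fine-tuned or used as a frozen embedder. The sketch below approximates the frozen "LightGBM + RoBERTa embedding" recipe using plain `transformers` and `lightgbm` rather than the `torch_frame` API; the mean-pooling choice and the hyperparameters are assumptions for illustration.

```python
import numpy as np
import torch
from lightgbm import LGBMClassifier
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base").eval()

@torch.no_grad()
def embed(texts):
    # Frozen RoBERTa: mean-pool the final hidden states over real tokens.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state      # (batch, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1)     # (batch, seq, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def fit_text_tabular(texts, tabular, labels):
    # Concatenate text embeddings with the remaining tabular columns,
    # then hand everything to a gradient-boosted tree ensemble.
    X = np.hstack([embed(texts), tabular])
    return LGBMClassifier(n_estimators=500).fit(X, labels)
```

Swapping the frozen encoder for a fine-tuned one (the "RoBERTa finetune" rows) means training the encoder end-to-end with the downstream head instead of caching its embeddings.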
PaLM 2 Technical Report (2023-05-17; code available):

| Model | Metric 7 |
|---|---|
| PaLM 2 (few-shot, k=10) | 0.8535 |
| PaLM 2 (zero-shot) | 0.7596 |
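The PaLM 2 rows fill only the single metric column shared with the other two papers, which is consistent with a threshold-free score such as AUROC: a zero- or few-shot model only needs to assign each comment a toxicity score, not commit to a decision threshold. A toy illustration of computing such a score column with scikit-learn (the data here is invented for the example):

```python
from sklearn.metrics import roc_auc_score

labels = [0, 0, 1, 1, 0, 1]                    # 1 = toxic
scores = [0.10, 0.40, 0.80, 0.35, 0.20, 0.90]  # model toxicity scores
print(roc_auc_score(labels, scores))           # 0.888...: 8 of 9 toxic/non-toxic
                                               # pairs are ranked correctly
```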