Paired cells report μ-F1 / m-F1 (micro-F1 / macro-F1) on the test set; CaseHOLD is a single F1 score. Missing cells are marked n/a.

Paper | Code | ECtHR Task A | ECtHR Task B | SCOTUS | EUR-LEX | LEDGAR | UNFAIR-ToS | CaseHOLD | Model | Release Date |
---|---|---|---|---|---|---|---|---|---|---|
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 71.4 / 64.0 | 87.6 / 77.8 | 70.5 / 60.9 | 71.6 / 55.6 | 87.7 / 82.2 | 87.5 / 81.0 | 70.7 | BERT | 2021-10-03 |
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 71.2 / 64.6 | 88.0 / 77.2 | 76.2 / 65.8 | 72.2 / 56.2 | 88.1 / 82.7 | 88.6 / 82.3 | 75.1 | Legal-BERT | 2021-10-03 |
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 71.2 / 64.2 | 88.0 / 77.5 | 76.4 / 66.2 | 71.0 / 55.9 | 88.0 / 82.3 | 88.3 / 81.0 | 75.6 | CaseLaw-BERT | 2021-10-03 |
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 70.5 / 63.8 | 88.1 / 76.6 | 71.7 / 61.4 | 71.8 / 56.6 | 87.7 / 82.1 | 87.7 / 80.2 | 70.4 | BigBird | 2021-10-03 |
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 69.6 / 62.4 | 88.0 / 77.8 | 72.2 / 62.5 | 71.9 / 56.7 | 87.7 / 82.3 | 87.7 / 80.1 | 72.0 | Longformer | 2021-10-03 |
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 69.5 / 60.7 | 87.2 / 77.3 | 70.8 / 61.2 | 71.8 / 57.5 | 87.9 / 82.1 | 87.7 / 81.5 | 71.7 | RoBERTa | 2021-10-03 |
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ✓ Link | 69.1 / 61.2 | 87.4 / 77.3 | 70.0 / 60.0 | 72.3 / 57.2 | 87.9 / 82.0 | 87.2 / 78.8 | 72.1 | DeBERTa | 2021-10-03 |
The Unreasonable Effectiveness of the Baseline: Discussing SVMs in Legal Text Classification | n/a | 66.3 / 55.0 | 76.0 / 65.4 | 74.4 / 64.5 | 65.7 / 49.0 | 88.0 / 82.6 | n/a | n/a | Optimised SVM Baseline | 2021-09-15 |
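The gap between the two numbers in each paired cell comes from how the F1 averages weight classes: micro-F1 pools true/false positives over all classes (so frequent labels dominate), while macro-F1 averages per-class F1 (so rare labels count equally). A minimal sketch of both computations, with the function name and example labels purely illustrative:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Return (micro_f1, macro_f1) for multi-class label lists."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but true class differs
            fn[t] += 1          # true class t was missed
    # micro-F1: pool counts across all classes before computing F1
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN) if TP else 0.0
    # macro-F1: compute per-class F1, then take the unweighted mean
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(per_class)
    return micro, macro

micro, macro = f1_scores([0, 0, 0, 1], [0, 0, 1, 1])
print(round(micro, 4), round(macro, 4))  # → 0.75 0.7333
```

On imbalanced legal corpora such as EUR-LEX, this is why μ-F1 is consistently higher than m-F1 in the table above: models do well on frequent labels and worse on the long tail of rare ones.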