Paper | Code | Standard Parseval (Full) | Standard Parseval (Span) | Standard Parseval (Nuclearity) | Standard Parseval (Relation) | RST-Parseval (Full) | RST-Parseval (Span) | RST-Parseval (Nuclearity) | RST-Parseval (Relation) | Model | Date |
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ Link | 58.1 | 79.8 | 70.4 | 60.0 | | | | | Bottom-up Llama 2 (70B) | 2024-03-08 |
Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ Link | 56.0 | 78.8 | 68.7 | 57.7 | | | | | Top-down Llama 2 (70B) | 2024-03-08 |
Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ Link | 56.0 | 78.3 | 68.1 | 57.8 | | | | | Bottom-up Llama 2 (13B) | 2024-03-08 |
Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ Link | 55.8 | 78.2 | 67.5 | 57.6 | | | | | Bottom-up Llama 2 (7B) | 2024-03-08 |
Bilingual Rhetorical Structure Parsing with Large Parallel Annotations | ✓ Link | 55.7 ± 0.3 | 78.7 ± 0.4 | 68.0 ± 0.6 | 57.3 ± 0.2 | | | | | DMRST | 2024-09-23 |
Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ Link | 55.6 | 78.6 | 67.9 | 57.7 | | | | | Top-down Llama 2 (13B) | 2024-03-08 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 55.4 ± 0.4 | 77.8 ± 0.3 | 68.0 ± 0.5 | 57.3 ± 0.2 | | | | | Bottom-up (DeBERTa) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 54.8 | 77.8 | 67.4 | 57.0 | | | | | Top-down (XLNet) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 54.4 | 78.5 | 67.9 | 56.6 | | | | | Top-down (DeBERTa) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 54.2 | | 65.9 | 56.3 | | | | | Bottom-up (XLNet) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 53.8 | 77.3 | 66.6 | 55.8 | | | | | Top-down (RoBERTa) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 53.7 | 76.1 | 66.5 | 55.4 | | | | | Bottom-up (RoBERTa) | 2022-10-15 |
Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ Link | 53.4 | 76.3 | 65.4 | 55.2 | | | | | Top-down Llama 2 (7B) | 2024-03-08 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 52.7 | | 65.3 | 54.9 | | | | | Bottom-up (SpanBERT) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 52.2 | 76.5 | 65.4 | 54.5 | | | | | Top-down (SpanBERT) | 2022-10-15 |
Top-down Discourse Parsing via Sequence Labelling | ✓ Link | 50.3 | 73.1 | 62.3 | 51.5 | | | | | LSTM Dynamic | 2021-02-03 |
RST Parsing from Scratch | ✓ Link | 50.2 | 74.3 | 64.3 | 51.6 | | 87.6 | 76.0 | 61.8 | End-to-end Top-down (XLNet) | 2021-05-23 |
Top-down Discourse Parsing via Sequence Labelling | ✓ Link | 49.4 | 72.7 | 61.7 | 50.5 | | | | | LSTM Static | 2021-02-03 |
Top-down Discourse Parsing via Sequence Labelling | ✓ Link | 49.2 | 70.2 | 60.1 | | | | | | Transformer (dynamic) | 2021-02-03 |
Top-down Discourse Parsing via Sequence Labelling | ✓ Link | 49.0 | 70.6 | 59.9 | 50.6 | | | | | Transformer (static) | 2021-02-03 |
RST Parsing from Scratch | ✓ Link | 46.8 | 71.1 | 59.6 | 47.7 | | | | | End-to-end Top-down (Glove) | 2021-05-23 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 46.6 | 69.8 | 59.1 | 48.3 | | | | | Top-down (BERT) | 2022-10-15 |
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ Link | 46.0 | 68.3 | 57.8 | 47.8 | | | | | Bottom-up (BERT) | 2022-10-15 |
Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining | | | 72.94 | 61.86 | | | | | | Guz et al. (2020) (pretrained) | 2020-11-06 |
Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining | | | 72.43 | 61.38 | | | | | | Guz et al. (2020) | 2020-11-06 |
Improving Neural RST Parsing Model with Silver Agreement Subtrees | | | | | | 62.6 | 87.1 | 75.0 | 63.2 | Top-down Span-based Parser with Silver Agreement Subtrees (ensemble) | 2021-06-01 |
Improving Neural RST Parsing Model with Silver Agreement Subtrees | | | | | | 61.8 | 86.8 | 74.7 | 62.5 | Top-down Span-based Parser with Silver Agreement Subtrees | 2021-06-01 |
Transition-based Neural RST Parsing with Implicit Syntax Features | ✓ Link | | | | | 59.9 | 85.5 | 73.1 | 60.2 | Transition-based Parser with Implicit Syntax Features | 2018-08-01 |
A Novel Discourse Parser Based on Support Vector Machine Classification | | | | | | 54.8 | 83.0 | 68.4 | 55.3 | HILDA Parser | 2009-08-02 |
Top-Down RST Parsing Utilizing Granularity Levels in Documents | ✓ Link | | | | | | 87.0 | 74.6 | 60.0 | Top-down Span-based Parser | 2020-04-03 |
A Two-Stage Parsing Method for Text-Level Discourse Analysis | ✓ Link | | | | | | 86.0 | 72.4 | 59.7 | Two-stage Parser | 2017-07-01 |
A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing | | | | | | | 85.7 | 71.0 | 58.2 | Bottom-up Linear-chain CRF-based Parser | 2014-06-01 |
CODRA: A Novel Discriminative Framework for Rhetorical Analysis | | | | | | | 83.84 | 68.90 | 55.87 | Two-stage Discourse Parser with a Sliding Window | 2015-09-01 |
Two Practical Rhetorical Structure Theory Parsers | | | | | | 54.9* | 82.6* | 67.1* | 55.4* | Greedy Bottom-up Parser with Syntactic Features | 2015-06-01 |
Empirical comparison of dependency conversions for RST discourse trees | | | | | | 54.3* | 82.6* | 66.6* | 54.6* | Re-implemented HILDA RST parser | 2016-09-01 |
Discourse Parsing with Attention-based Hierarchical Neural Networks | | | | | | 50.6* | 82.2* | 66.5* | 51.4* | Discourse Parser with Hierarchical Attention | 2016-11-01 |
Representation Learning for Text-level Discourse Parsing | ✓ Link | | | | | 57.6* | 82.0* | 68.2* | 57.8* | Discourse Parsing from Linear Projection | 2014-06-01 |
Cross-lingual RST Discourse Parsing | ✓ Link | | | | | 56.0* | 81.3* | 68.1* | 56.3* | Transition-Based Parser Trained on Cross-Lingual Corpus | 2017-01-11 |
Multi-view and multi-task training of RST discourse parsers | ✓ Link | | | | | 47.5* | 79.7* | 63.6* | 47.7* | LSTM Sequential Discourse Parser (Braud et al., 2016) | 2016-12-01 |
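For reference, the Span, Nuclearity, Relation and Full columns above are micro-averaged F1 scores over the labelled constituents of predicted versus gold RST trees; Standard Parseval and RST-Parseval differ in which tree nodes are counted and how labels are attached. The sketch below is only a minimal illustration of that computation, not the evaluator used by any of the listed papers: the tuple layout for constituents, the names `parseval_scores` and `_f1`, and the per-document scoring are assumptions made here for clarity (reported corpus-level figures pool counts over all test documents).

```python
from typing import Dict, Iterable, Set, Tuple

# One scored constituent: (first EDU index, last EDU index, nuclearity, relation).
# This tuple layout is an assumption for illustration only.
Constituent = Tuple[int, int, str, str]

def _f1(pred: Set, gold: Set) -> float:
    """F1 between two sets of (possibly partially labelled) constituents."""
    matched = len(pred & gold)
    if matched == 0:
        return 0.0
    precision = matched / len(pred)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

def parseval_scores(pred: Iterable[Constituent],
                    gold: Iterable[Constituent]) -> Dict[str, float]:
    """Span / Nuclearity / Relation / Full F1 for one document.

    Each metric keeps the span boundaries and adds progressively more labels,
    mirroring the four columns reported per evaluation scheme in the table.
    """
    pred, gold = set(pred), set(gold)
    views = {
        "Span":       lambda c: (c[0], c[1]),
        "Nuclearity": lambda c: (c[0], c[1], c[2]),
        "Relation":   lambda c: (c[0], c[1], c[3]),
        "Full":       lambda c: c,
    }
    return {name: _f1({v(c) for c in pred}, {v(c) for c in gold})
            for name, v in views.items()}

# Example with two tiny, hypothetical trees over EDUs 1..3:
pred = [(1, 3, "NS", "Elaboration"), (1, 2, "NN", "Joint")]
gold = [(1, 3, "NS", "Elaboration"), (2, 3, "SN", "Background")]
print(parseval_scores(pred, gold))  # Span/Nuclearity/Relation/Full F1 in [0, 1]
```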