OpenCodePapers

Discourse Parsing on RST-DT

Task: Discourse Parsing
Dataset: RST-DT
Results over time: interactive chart (metric scores plotted by model release date) omitted.
Leaderboard

Std = Standard Parseval, RST = RST-Parseval; sub-columns are Full / Span / Nuclearity / Relation F1. ✓ = code available. Scores marked * appear as such in the source listing.

| Paper | Code | Std Full | Std Span | Std Nuc | Std Rel | RST Full | RST Span | RST Nuc | RST Rel | Model | Release date |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ | 58.1 | 79.8 | 70.4 | 60.0 | | | | | Bottom-up Llama 2 (70B) | 2024-03-08 |
| Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ | 56.0 | 78.8 | 68.7 | 57.7 | | | | | Top-down Llama 2 (70B) | 2024-03-08 |
| Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ | 56.0 | 78.3 | 68.1 | 57.8 | | | | | Bottom-up Llama 2 (13B) | 2024-03-08 |
| Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ | 55.8 | 78.2 | 67.5 | 57.6 | | | | | Bottom-up Llama 2 (7B) | 2024-03-08 |
| Bilingual Rhetorical Structure Parsing with Large Parallel Annotations | ✓ | 55.7 ± 0.3 | 78.7 ± 0.4 | 68.0 ± 0.6 | 57.3 ± 0.2 | | | | | DMRST | 2024-09-23 |
| Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ | 55.6 | 78.6 | 67.9 | 57.7 | | | | | Top-down Llama 2 (13B) | 2024-03-08 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 55.4 ± 0.4 | 77.8 ± 0.3 | 68.0 ± 0.5 | 57.3 ± 0.2 | | | | | Bottom-up (DeBERTa) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 54.8 | 77.8 | 67.4 | 57.0 | | | | | Top-down (XLNet) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 54.4 | 78.5 | 67.9 | 56.6 | | | | | Top-down (DeBERTa) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 54.2 | | 65.9 | 56.3 | | | | | Bottom-up (XLNet) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 53.8 | 77.3 | 66.6 | 55.8 | | | | | Top-down (RoBERTa) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 53.7 | 76.1 | 66.5 | 55.4 | | | | | Bottom-up (RoBERTa) | 2022-10-15 |
| Can we obtain significant success in RST discourse parsing by using Large Language Models? | ✓ | 53.4 | 76.3 | 65.4 | 55.2 | | | | | Top-down Llama 2 (7B) | 2024-03-08 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 52.7 | | 65.3 | 54.9 | | | | | Bottom-up (SpanBERT) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 52.2 | 76.5 | 65.4 | 54.5 | | | | | Top-down (SpanBERT) | 2022-10-15 |
| Top-down Discourse Parsing via Sequence Labelling | ✓ | 50.3 | 73.1 | 62.3 | 51.5 | | | | | LSTM Dynamic | 2021-02-03 |
| RST Parsing from Scratch | ✓ | 50.2 | 74.3 | 64.3 | 51.6 | | 87.6 | 76.0 | 61.8 | End-to-end Top-down (XLNet) | 2021-05-23 |
| Top-down Discourse Parsing via Sequence Labelling | ✓ | 49.4 | 72.7 | 61.7 | 50.5 | | | | | LSTM Static | 2021-02-03 |
| Top-down Discourse Parsing via Sequence Labelling | ✓ | 49.2 | 70.2 | 60.1 | | | | | | Transformer (dynamic) | 2021-02-03 |
| Top-down Discourse Parsing via Sequence Labelling | ✓ | 49.0 | 70.6 | 59.9 | 50.6 | | | | | Transformer (static) | 2021-02-03 |
| RST Parsing from Scratch | ✓ | 46.8 | 71.1 | 59.6 | 47.7 | | | | | End-to-end Top-down (GloVe) | 2021-05-23 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 46.6 | 69.8 | 59.1 | 48.3 | | | | | Top-down (BERT) | 2022-10-15 |
| A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | ✓ | 46.0 | 68.3 | 57.8 | 47.8 | | | | | Bottom-up (BERT) | 2022-10-15 |
| Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining | | | 72.94 | 61.86 | | | | | | Guz et al. (2020) (pretrained) | 2020-11-06 |
| Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining | | | 72.43 | 61.38 | | | | | | Guz et al. (2020) | 2020-11-06 |
| Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining | | | 72.43 | | | | | | | Guz et al. (2020) | 2020-12-01 |
| Improving Neural RST Parsing Model with Silver Agreement Subtrees | | | | | | 62.6 | 87.1 | 75.0 | 63.2 | Top-down Span-based Parser with Silver Agreement Subtrees (ensemble) | 2021-06-01 |
| Improving Neural RST Parsing Model with Silver Agreement Subtrees | | | | | | 61.8 | 86.8 | 74.7 | 62.5 | Top-down Span-based Parser with Silver Agreement Subtrees | 2021-06-01 |
| Transition-based Neural RST Parsing with Implicit Syntax Features | ✓ | | | | | 59.9 | 85.5 | 73.1 | 60.2 | Transition-based Parser with Implicit Syntax Features | 2018-08-01 |
| A Novel Discourse Parser Based on Support Vector Machine Classification | | | | | | 54.8 | 83.0 | 68.4 | 55.3 | HILDA Parser | 2009-08-02 |
| Top-Down RST Parsing Utilizing Granularity Levels in Documents | ✓ | | | | | | 87.0 | 74.6 | 60.0 | Top-down Span-based Parser | 2020-04-03 |
| A Two-Stage Parsing Method for Text-Level Discourse Analysis | ✓ | | | | | | 86.0 | 72.4 | 59.7 | Two-stage Parser | 2017-07-01 |
| A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing | | | | | | | 85.7 | 71.0 | 58.2 | Bottom-up Linear-chain CRF-based Parser | 2014-06-01 |
| CODRA: A Novel Discriminative Framework for Rhetorical Analysis | | | | | | | 83.84 | 68.90 | 55.87 | Two-stage Discourse Parser with a Sliding Window | 2015-09-01 |
| Two Practical Rhetorical Structure Theory Parsers | | | | | | 54.9* | 82.6* | 67.1* | 55.4* | Greedy Bottom-up Parser with Syntactic Features | 2015-06-01 |
| Empirical comparison of dependency conversions for RST discourse trees | | | | | | 54.3* | 82.6* | 66.6* | 54.6* | Re-implemented HILDA RST parser | 2016-09-01 |
| Discourse Parsing with Attention-based Hierarchical Neural Networks | | | | | | 50.6* | 82.2* | 66.5* | 51.4* | Discourse Parser with Hierarchical Attention | 2016-11-01 |
| Representation Learning for Text-level Discourse Parsing | ✓ | | | | | 57.6* | 82.0* | 68.2* | 57.8* | Discourse Parsing from Linear Projection | 2014-06-01 |
| Cross-lingual RST Discourse Parsing | ✓ | | | | | 56.0* | 81.3* | 68.1* | 56.3* | Transition-Based Parser Trained on Cross-Lingual Corpus | 2017-01-11 |
| Multi-view and multi-task training of RST discourse parsers | ✓ | | | | | 47.5* | 79.7* | 63.6* | 47.7* | LSTM Sequential Discourse Parser (Braud et al., 2016) | 2016-12-01 |
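The Parseval columns above are F1 scores computed over the labeled constituents of a predicted RST tree against the gold tree: "Span" checks tree structure only, "Nuclearity" and "Relation" additionally require the respective label to match, and "Full" requires both labels to match. As a rough illustration only (not the evaluation code of any listed paper), here is a minimal sketch that assumes each tree has already been flattened into a set of `(start_edu, end_edu, nuclearity, relation)` tuples:

```python
from typing import Iterable, Set, Tuple

# A labeled constituent: (start EDU index, end EDU index, nuclearity, relation).
Span = Tuple[int, int, str, str]

def f1(gold: Set, pred: Set) -> float:
    """Micro-averaged F1 between two sets of items."""
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0

def parseval_scores(gold: Iterable[Span], pred: Iterable[Span]) -> dict:
    """Span / Nuclearity / Relation / Full F1 over labeled constituents."""
    gold, pred = set(gold), set(pred)
    project = {
        "Span":       lambda s: (s[0], s[1]),          # structure only
        "Nuclearity": lambda s: (s[0], s[1], s[2]),    # structure + nuclearity
        "Relation":   lambda s: (s[0], s[1], s[3]),    # structure + relation
        "Full":       lambda s: s,                     # everything must match
    }
    return {name: f1({proj(s) for s in gold}, {proj(s) for s in pred})
            for name, proj in project.items()}

# Toy example: all three spans are structurally correct, but one nuclearity
# label and one relation label are wrong (labels here are illustrative).
gold = {(1, 2, "NS", "Elaboration"), (1, 4, "NN", "Joint"), (3, 4, "NS", "Attribution")}
pred = {(1, 2, "NS", "Elaboration"), (1, 4, "NS", "Joint"), (3, 4, "NS", "Contrast")}
scores = parseval_scores(gold, pred)
# Span = 1.0, Nuclearity ≈ 0.667, Relation ≈ 0.667, Full ≈ 0.333
```

Note that the Standard Parseval and RST-Parseval columns are not comparable: the two schemes extract different constituent sets from the same tree (RST-Parseval also scores each child span, which yields systematically higher numbers), which is why the leaderboard keeps them apart.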