Paper | Code | ROUGE-1 | ROUGE-2 | ROUGE-L | Model | Date |
--- | --- | --- | --- | --- | --- | --- |
Universal Evasion Attacks on Summarization Scoring | ✓ Link | 48.18 | 19.84 | 45.35 | Scrambled code + broken (alter) | 2022-10-25 |
BRIO: Bringing Order to Abstractive Summarization | ✓ Link | 47.78 | 23.55 | 44.57 | BRIO | 2022-03-31 |
Calibrating Sequence Likelihood Improves Conditional Language Generation | | 47.36 | 24.02 | 44.45 | Pegasus | 2022-09-30 |
SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization | ✓ Link | 47.16 | 22.61 | 43.87 | PEGASUS + SummaReranker | 2022-03-13 |
Universal Evasion Attacks on Summarization Scoring | ✓ Link | 46.71 | 20.39 | 43.56 | Scrambled code + broken | 2022-10-25 |
SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization | ✓ Link | 46.67 | 22.15 | 43.54 | BART + SimCLS | 2021-06-03 |
Salience Allocation as Guidance for Abstractive Summarization | ✓ Link | 46.27 | 22.64 | 43.08 | SEASON | 2022-10-22 |
Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator | ✓ Link | 44.76 | 21.55 | 41.34 | Fourier Transformer | 2023-05-24 |
GLM: General Language Model Pretraining with Autoregressive Blank Infilling | ✓ Link | 44.7 | 21.4 | 41.4 | GLM-XXLarge | 2021-03-18 |
R-Drop: Regularized Dropout for Neural Networks | ✓ Link | 44.51 | 21.58 | 41.24 | BART + R-Drop | 2021-06-28 |
Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization | ✓ Link | 44.50 | 21.55 | 41.24 | CoCoNet + CoCoPretrain | |
Muppet: Massive Multi-task Representations with Pre-Finetuning | ✓ Link | 44.45 | 21.25 | 41.4 | MUPPET BART Large | 2021-01-26 |
Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization | ✓ Link | 44.39 | 21.41 | 41.05 | CoCoNet | |
Better Fine-Tuning by Reducing Representational Collapse | ✓ Link | 44.38 | 21.53 | 41.17 | BART+R3F | 2020-08-06 |
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation | ✓ Link | 44.31 | 21.35 | 41.60 | ERNIE-GEN LARGE (large-scale text corpora) | 2020-01-26 |
PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation | ✓ Link | 44.30 | 21.12 | 41.41 | PALM | 2020-04-14 |
ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training | ✓ Link | 44.20 | 21.17 | 41.30 | ProphetNet | 2020-01-13 |
PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | ✓ Link | 44.17 | 21.47 | 41.11 | PEGASUS | 2019-12-18 |
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | ✓ Link | 44.16 | 21.28 | 40.90 | BART | 2019-10-29 |
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation | ✓ Link | 44.02 | 21.17 | 41.26 | ERNIE-GEN LARGE | 2020-01-26 |
LongT5: Efficient Text-To-Text Transformer for Long Sequences | ✓ Link | 43.94 | 21.40 | 41.28 | LongT5 | 2021-12-15 |
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ✓ Link | 43.52 | 21.55 | 40.69 | T5 | 2019-10-23 |
Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model | ✓ Link | 43.19 | 19.80 | 40.40 | SRformer-BART | 2023-05-24 |
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training | ✓ Link | 43.16 | 20.42 | 40.14 | UniLMv2 | 2020-02-28 |
Unified Language Model Pre-training for Natural Language Understanding and Generation | ✓ Link | 43.08 | 20.43 | 40.34 | UniLM | 2019-05-08 |
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation | ✓ Link | 42.30 | 19.92 | 39.68 | ERNIE-GEN BASE | 2020-01-26 |
Text Summarization with Pretrained Encoders | ✓ Link | 42.13 | 19.6 | 39.18 | BertSumExtAbs | 2019-08-22 |
Summary Level Training of Sentence Rewriting for Abstractive Summarization | | 41.90 | 19.08 | 39.64 | BERT-ext + abs + RL + rerank | 2019-09-19 |
Mixture Content Selection for Diverse Sequence Generation | ✓ Link | 41.72 | 18.74 | 38.79 | Selector & Pointer-Generator | 2019-09-04 |
Pretraining-Based Natural Language Generation for Text Summarization | ✓ Link | 41.71 | 19.49 | 38.79 | Two-Stage + RL | 2019-02-25 |
Deep Communicating Agents for Abstractive Summarization | | 41.69 | 19.47 | 37.92 | DCA | 2018-03-27 |
Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling | | 41.54 | 18.18 | 36.47 | Li et al. | 2018-10-01 |
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting | ✓ Link | 41.47 | 18.72 | 37.76 | rnn-ext + RL | 2018-05-28 |
An Editorial Network for Enhanced Document Summarization | | 41.42 | 19.03 | 38.36 | EditNet | 2019-02-27 |
Bottom-Up Abstractive Summarization | ✓ Link | 41.22 | 18.68 | 38.34 | Bottom-Up Summarization | 2018-08-31 |
Mask Attention Networks: Rethinking and Strengthen Transformer | ✓ Link | 40.98 | 18.29 | 37.88 | Mask Attention Network | 2021-03-25 |
Subformer: A Parameter Reduced Transformer | | 40.9 | 18.3 | 37.7 | Subformer-base | 2021-01-01 |
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting | ✓ Link | 40.88 | 17.80 | 38.54 | rnn-ext + abs + RL + rerank | 2018-05-28 |
A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss | ✓ Link | 40.68 | 17.97 | 37.13 | end2end w/ inconsistency loss | 2018-05-16 |
Closed-Book Training to Improve Summarization Encoder Memory | | 40.66 | 17.87 | 37.06 | RL + pg + cbdec | 2018-09-12 |
Multi-Reward Reinforced Summarization with Saliency and Entailment | | 40.43 | 18.00 | 37.10 | ROUGESal+Ent RL | 2018-04-17 |
Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond | ✓ Link | 40.42 | 17.62 | 36.67 | LEAD-3 | 2016-02-19 |
Improving Neural Abstractive Document Summarization with Structural Regularization | | 40.30 | 18.02 | 37.36 | Li et al. | 2018-10-01 |
Improving Abstraction in Text Summarization | | 40.19 | 17.38 | 37.52 | ML+RL ROUGE+Novel, with LM | 2018-08-23 |
Pay Less Attention with Lightweight and Dynamic Convolutions | ✓ Link | 39.84 | 16.25 | 36.73 | Dynamic Conv | 2019-01-29 |
Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation | | 39.81 | 17.64 | 36.54 | Pointer + Coverage + EntailmentGen + QuestionGen | 2018-05-28 |
Get To The Point: Summarization with Pointer-Generator Networks | ✓ Link | 39.53 | 17.28 | 36.38 | PTGEN + Coverage | 2017-04-14 |
Attention Is All You Need | ✓ Link | 39.50 | 16.06 | 36.63 | Transformer | 2017-06-12 |
The Summary Loop: Learning to Write Abstractive Summaries Without Examples | ✓ Link | 37.7 | | | Summary Loop Unsup | 2021-05-11 |
CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation | ✓ Link | | | 27.4 | CriSPO 3-shot | 2024-10-03 |
DELTA: A DEep learning based Language Technology plAtform | ✓ Link | | | 27.3 | DELTA (BLSTM) | 2019-08-02 |
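The three numeric columns above are ROUGE-1, ROUGE-2, and ROUGE-L F1 scores (×100), as typically reported on the CNN/DailyMail summarization benchmark. The sketch below shows one way to compute comparable scores for a single reference/prediction pair with the open-source `rouge-score` package; the example strings and the stemming setting are illustrative assumptions, and the listed papers generally score the full test set with the original ROUGE-1.5.5 toolkit or pyrouge-style wrappers, so results will not match the table exactly.

```python
# Minimal sketch, assuming the `rouge-score` package (pip install rouge-score).
# Leaderboard numbers are aggregated over the whole test set; this only scores
# one pair, so treat it as an illustration of the metric, not the official scorer.
from rouge_score import rouge_scorer

reference = "the cat sat on the mat ."        # hypothetical gold summary
prediction = "a cat was sitting on the mat ."  # hypothetical system output

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)

for name, score in scores.items():
    # Each entry holds precision, recall, and F-measure; the table reports F1 * 100.
    print(f"{name}: F1 = {100 * score.fmeasure:.2f}")
```

Averaging these per-example F1 values over the test set (and multiplying by 100) yields numbers on the same scale as the ROUGE columns in the table.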