Retrieval-Augmented Generation-based Relation Extraction | ✓ Link | 86.6 | | | | | RAG4RE | 2024-04-20 |
DeepStruct: Pretraining of Language Models for Structure Prediction | ✓ Link | 76.8 | | | | | DeepStruct multi-task w/ finetune | 2022-05-21 |
Unified Semantic Typing with Meaningful Label Inference | ✓ Link | 75.5 | | | | | UNiST (LARGE) | 2022-05-04 |
Enhancing Targeted Minority Class Prediction in Sentence-Level Relation Extraction | ✓ Link | 75.4 | | | | | RE-MC | 2022-06-29 |
Generative Prompt Tuning for Relation Classification | ✓ Link | 75.3 | | | | | GenPT (T5) | 2022-10-22 |
Relation Classification with Entity Type Restriction | | 75.2 | | | | | RECENT+SpanBERT | 2021-05-18 |
Summarization as Indirect Supervision for Relation Extraction | ✓ Link | 75.1 | 70.7 | 64.9 | 52.0 | 20.6 | SuRE (PEGASUS-large) | 2022-05-19 |
Improving Sentence-Level Relation Extraction through Curriculum Learning | | 75.0 | | | | | EXOBRAIN | 2021-07-20 |
Relation Classification as Two-way Span-Prediction | | 74.8 | | | | | Relation Reduction | 2020-10-09 |
An Improved Baseline for Sentence-level Relation Extraction | ✓ Link | 74.6 | | | | | RoBERTa-large-typed-marker | 2021-02-02 |
Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction | ✓ Link | 73.9 | 67.9 | 69.0 | 63.7 | 62.8 | NLI_DeBERTa | 2021-09-08 |
Learning from Noisy Labels for Entity-Centric Information Extraction | ✓ Link | 73.0 | | | | | Noise-robust Co-regularization + BERT-large | 2021-04-17 |
DeNERT-KG: Named Entity and Relation Extraction Model Using DQN, Knowledge Graph, and BERT | | 72.4 | | | | | DeNERT-KG | 2020-09-15 |
K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | ✓ Link | 72.04 | 56.0 | 45.1 | 13.8 | | K-ADAPTER (F+L) | 2020-02-05 |
Structured Prediction as Translation between Augmented Natural Languages | ✓ Link | 71.9 | | | | | TANL | 2021-01-14 |
KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation | ✓ Link | 71.7 | | | | | KEPLER | 2019-11-13 |
Matching the Blanks: Distributional Similarity for Relation Learning | ✓ Link | 71.5 | 64.8 | | 43.4 | | BERTEM+MTB | 2019-06-07 |
Knowledge Enhanced Contextual Word Representations | ✓ Link | 71.5 | | | | | KnowBert-W+W | 2019-09-09 |
Efficient long-distance relation extraction with DG-SpanBERT | | 71.5 | | | | | DG-SpanBERT-large | 2020-04-07 |
Sequence Generation with Label Augmentation for Relation Extraction | ✓ Link | 71.2 | | | | | RELA | 2022-12-29 |
Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction | ✓ Link | 71.0 | | | | | NLI_RoBERTa | 2021-09-08 |
SpanBERT: Improving Pre-training by Representing and Predicting Spans | ✓ Link | 70.8 | | | | | SpanBERT-large | 2019-07-24 |
GDPNet: Refining Latent Multi-View Graph for Relation Extraction | ✓ Link | 70.5 | | | | | GDPNet | 2020-12-12 |
Learning from Context or Names? An Empirical Study on Neural Relation Extraction | ✓ Link | 69.5 | | | | | Contrastive Pre-training | 2020-10-05 |
Enriching Pre-trained Language Model with Entity Information for Relation Classification | ✓ Link | 69.4 | | | | | R-BERT | 2019-05-20 |
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | ✓ Link | 68.2 | | | | | C-GCN + PA-LSTM | 2018-09-26 |
Attention Guided Graph Convolutional Networks for Relation Extraction | ✓ Link | 68.2 | | | | | C-AGGCN | 2019-06-18 |
ERNIE: Enhanced Language Representation with Informative Entities | ✓ Link | 67.97 | | | | | ERNIE | 2019-05-17 |
Simple BERT Models for Relation Extraction and Semantic Role Labeling | ✓ Link | 67.8 | | | | | BERT-LSTM-base | 2019-04-10 |
Beyond Word Attention: Using Segment Attention in Neural Relation Extraction | ✓ Link | 67.6 | | | | | SA-LSTM+D | 2019-08-10 |
Improving Relation Extraction by Pre-trained Language Representations | ✓ Link | 67.4 | | | | | TRE | 2019-06-07 |
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | ✓ Link | 67.1 | | | | | GCN + PA-LSTM | 2018-09-26 |
Simplifying Graph Convolutional Networks | ✓ Link | 67.0 | | | | | C-SGC | 2019-02-19 |
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | ✓ Link | 66.4 | | | | | C-GCN | 2018-09-26 |
Position-aware Attention and Supervised Data Improve Slot Filling | ✓ Link | 65.1 | | | | | PA-LSTM | 2017-09-01 |
Attention Guided Graph Convolutional Networks for Relation Extraction | ✓ Link | 65.1 | | | | | AGGCN | 2019-06-18 |
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | ✓ Link | 64.0 | | | | | GCN | 2018-09-26 |
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | ✓ Link | 52.2 | | | | | LLM-QA4RE (XXLarge) | 2023-05-18 |
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | ✓ Link | | | 51.6 | 17.0 | | LUKE | 2020-10-02 |