Paper | Code | MAP | MRR | Model | Date
--- | --- | --- | --- | --- | ---
Structural Self-Supervised Objectives for Transformers | ✓ Link | 0.927 | 0.939 | TANDA-DeBERTa-V3-Large + ALL | 2023-09-15 |
RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm | | 0.924 | 0.908 | RLAS-BIABC | 2023-01-07 |
TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection | ✓ Link | 0.920 | 0.933 | TANDA-RoBERTa (ASNQ, WikiQA) | 2019-11-11 |
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | | 0.909 | 0.920 | DeBERTa-V3-Large + ALL | 2022-05-20 |
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | | 0.901 | 0.914 | DeBERTa-Large + SSP | 2022-05-20 |
Paragraph-based Transformer Pre-training for Multi-Sentence Inference | ✓ Link | 0.887 | 0.900 | RoBERTa-Base Joint MSPP | 2022-05-02 |
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | | 0.887 | 0.899 | RoBERTa-Base + SSP | 2022-05-20 |
A Compare-Aggregate Model with Latent Clustering for Answer Selection | | 0.764 | 0.784 | Comp-Clip + LM + LC | 2019-05-30 |
Simple and Effective Text Matching with Richer Alignment Features | ✓ Link | 0.7452 | 0.7618 | RE2 | 2019-08-01 |
Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering | ✓ Link | 0.712 | 0.727 | HyperQA | 2017-07-25 |
Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement | | 0.7090 | 0.7234 | PWIM | 2016-06-01 |
Key-Value Memory Networks for Directly Reading Documents | ✓ Link | 0.7069 | 0.7265 | Key-Value Memory Network | 2016-06-09 |
Sentence Similarity Learning by Lexical Decomposition and Composition | ✓ Link | 0.7058 | 0.7226 | LDC | 2016-02-23 |
Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency | | 0.7010 | 0.7180 | PairwiseRank + Multi-Perspective CNN | 2018-09-06 |
Neural Variational Inference for Text Processing | ✓ Link | 0.6886 | 0.7069 | Attentive LSTM | 2015-11-19 |
Attentive Pooling Networks | ✓ Link | 0.6886 | 0.6957 | AP-CNN | 2016-02-11 |
Neural Variational Inference for Text Processing | ✓ Link | 0.682 | 0.6988 | LSTM (lexical overlap + dist output) | 2015-11-19 |
Neural Semantic Encoders | ✓ Link | 0.6811 | 0.6993 | MMA-NSE attention | 2016-07-14 |
Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | ✓ Link | 0.6788 | 0.6908 | SWEM-concat | 2018-05-24 |
Neural Variational Inference for Text Processing | ✓ Link | 0.6552 | 0.6747 | LSTM | 2015-11-19 |
Deep Learning for Answer Sentence Selection | ✓ Link | 0.6520 | 0.6652 | Bigram-CNN (lexical overlap + dist output) | 2014-12-04 |
WikiQA: A Challenge Dataset for Open-Domain Question Answering | | 0.6520 | 0.6652 | CNN-Cnt | 2015-09-01 |
Deep Learning for Answer Sentence Selection | ✓ Link | 0.6190 | 0.6281 | Bigram-CNN | 2014-12-04 |
Distributed Representations of Sentences and Documents | ✓ Link | 0.5976 | 0.6058 | Paragraph vector (lexical overlap + dist output) | 2014-05-16 |
Distributed Representations of Sentences and Documents | ✓ Link | 0.5110 | 0.5160 | Paragraph vector | 2014-05-16 |
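The two score columns are, by all appearances, MAP (mean average precision) and MRR (mean reciprocal rank), the standard answer-sentence-selection metrics reported on WikiQA (the TANDA-RoBERTa row matches the paper's published MAP/MRR). For reference, below is a minimal sketch of how these two metrics are typically computed over ranked candidate lists; the function names and input format are illustrative and not taken from any of the cited papers.

```python
from typing import List, Tuple

def average_precision(relevance: List[int]) -> float:
    """AP for one question: `relevance` holds the 0/1 labels of the
    candidate sentences, ordered by the model's ranking (best first)."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(relevance: List[int]) -> float:
    """RR for one question: inverse rank of the first correct answer."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_labels: List[List[int]]) -> Tuple[float, float]:
    """MAP and MRR averaged over questions with at least one correct answer
    (questions without positives are conventionally dropped on WikiQA)."""
    usable = [labels for labels in ranked_labels if any(labels)]
    map_score = sum(average_precision(l) for l in usable) / len(usable)
    mrr_score = sum(reciprocal_rank(l) for l in usable) / len(usable)
    return map_score, mrr_score

# Example: two questions, labels already sorted by model score.
print(map_mrr([[0, 1, 0, 1], [1, 0, 0]]))  # -> (0.75, 0.75)
```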