| Paper | Code | Score | Model | Date |
|-------|------|-------|-------|------|
| Scaling Sentence Embeddings with Large Language Models | ✓ | 0.9004 | PromptEOL+CSE+LLaMA-30B | 2023-07-31 |
| AnglE-optimized Text Embeddings | ✓ | 0.8956 | AnglE-LLaMA-13B | 2023-09-22 |
| Scaling Sentence Embeddings with Large Language Models | ✓ | 0.8952 | PromptEOL+CSE+OPT-13B | 2023-07-31 |
| Scaling Sentence Embeddings with Large Language Models | ✓ | 0.8951 | PromptEOL+CSE+OPT-2.7B | 2023-07-31 |
| AnglE-optimized Text Embeddings | ✓ | 0.8943 | AnglE-LLaMA-7B-v2 | 2023-09-22 |
| Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ | 0.8863 | Trans-Encoder-RoBERTa-large-cross (unsup.) | 2021-09-27 |
| Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ | 0.8816 | Trans-Encoder-BERT-large-bi (unsup.) | 2021-09-27 |
| Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | ✓ | 0.8808 | PromCSE-RoBERTa-large (0.355B) | 2022-03-14 |
| SimCSE: Simple Contrastive Learning of Sentence Embeddings | ✓ | 0.8666 | SimCSE-RoBERTa-large | 2021-04-18 |
| Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ | 0.8577 | Trans-Encoder-RoBERTa-base-cross (unsup.) | 2021-09-27 |
| Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ | 0.8508 | Trans-Encoder-BERT-base-bi (unsup.) | 2021-09-27 |
| Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ | 0.8444 | Trans-Encoder-BERT-base-cross (unsup.) | 2021-09-27 |
| DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | ✓ | 0.8390 | DiffCSE-BERT-base | 2022-04-21 |
| DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | ✓ | 0.8281 | DiffCSE-RoBERTa-base | 2022-04-21 |
| Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | ✓ | 0.8185 | SRoBERTa-NLI-large | 2019-08-27 |
| Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ✓ | 0.814 | Mirror-BERT-base (unsup.) | 2021-04-16 |
| Generating Datasets with Pretrained Language Models | ✓ | 0.8049 | Dino (STSb/) | 2021-04-15 |
| Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ✓ | 0.798 | Mirror-RoBERTa-base (unsup.) | 2021-04-16 |
| An Unsupervised Sentence Embedding Method by Mutual Information Maximization | ✓ | 0.7523 | IS-BERT-NLI | 2020-09-25 |
| On the Sentence Embeddings from Pre-trained Language Models | ✓ | 0.7492 | BERT-large-flow (target) | 2020-11-02 |