Paper | Code | Score | Model | Date
--- | --- | --- | --- | ---
Scaling Sentence Embeddings with Large Language Models | ✓ Link | 0.8020 | PromptEOL+CSE+OPT-13B | 2023-07-31 |
Scaling Sentence Embeddings with Large Language Models | ✓ Link | 0.7972 | PromptEOL+CSE+LLaMA-30B | 2023-07-31 |
Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | ✓ Link | 0.7956 | PromCSE-RoBERTa-large (0.355B) | 2022-03-14 |
Scaling Sentence Embeddings with Large Language Models | ✓ Link | 0.7949 | PromptEOL+CSE+OPT-2.7B | 2023-07-31 |
AnglE-optimized Text Embeddings | ✓ Link | 0.7868 | AnglE-LLaMA-7B | 2023-09-22 |
AnglE-optimized Text Embeddings | ✓ Link | 0.7868 | AnglE-LLaMA-13B | 2023-09-22 |
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.7828 | Trans-Encoder-RoBERTa-large-cross (unsup.) | 2021-09-27 |
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.7819 | Trans-Encoder-BERT-large-bi (unsup.) | 2021-09-27 |
SimCSE: Simple Contrastive Learning of Sentence Embeddings | ✓ Link | 0.7746 | SimCSE-RoBERTa-large | 2021-04-18 |
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.7637 | Trans-Encoder-RoBERTa-base-cross (unsup.) | 2021-09-27 |
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.7509 | Trans-Encoder-BERT-base-bi (unsup.) | 2021-09-27 |
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | ✓ Link | 0.7453 | SRoBERTa-NLI-large | 2019-08-27 |
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | ✓ Link | 0.7228 | DiffCSE-BERT-base | 2022-04-21 |
Generating Datasets with Pretrained Language Models | ✓ Link | 0.7027 | Dino (STSb/🦕) | 2021-04-15 |
SimCSE: Simple Contrastive Learning of Sentence Embeddings | ✓ Link | 0.7016 | SimCSE-RoBERTa-base | 2021-04-18 |
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | ✓ Link | 0.7005 | DiffCSE-RoBERTa-base | 2022-04-21 |
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ✓ Link | 0.6740 | Mirror-BERT-base (unsup.) | 2021-04-16 |
On the Sentence Embeddings from Pre-trained Language Models | ✓ Link | 0.6520 | BERTlarge-flow (target) | 2020-11-02 |
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ✓ Link | 0.6480 | Mirror-RoBERTa-base (unsup.) | 2021-04-16 |
An Unsupervised Sentence Embedding Method by Mutual Information Maximization | ✓ Link | 0.5677 | IS-BERT-NLI | 2020-09-25 |
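Scores of this kind are typically Spearman rank correlations between model-predicted sentence-pair similarities (cosine similarity of the two embeddings) and human gold ratings; that interpretation is an assumption here, since the table does not name the metric. A minimal, self-contained sketch of that evaluation on toy data (all embeddings and ratings below are hypothetical):

```python
# Hypothetical sketch of STS-style evaluation: cosine similarity of
# sentence-embedding pairs, then Spearman correlation against gold ratings.
# All data below is made up for illustration.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def spearman(xs, ys):
    """Spearman correlation: rank-transform (average ranks for ties),
    then Pearson correlation on the ranks."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average of tied positions, 1-based
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy embeddings for three sentence pairs (hypothetical).
emb_a = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
emb_b = [[1.0, 0.1], [0.0, 1.0], [0.0, 1.0]]
gold = [5.0, 2.0, 4.5]  # human similarity ratings on a 0-5 scale

pred = [cosine(a, b) for a, b in zip(emb_a, emb_b)]
print(round(spearman(pred, gold), 4))  # → 0.5
```

A real evaluation would run this over a full STS test set; the models in the table differ only in how they produce the embeddings, not in this scoring step.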