OpenCodePapers

Semantic Textual Similarity on STS14

Semantic Textual Similarity
Results over time (interactive chart: Spearman correlation by model release date)
Leaderboard
Paper | Code | Spearman Correlation | Model | Release Date
AnglE-optimized Text Embeddings | ✓ Link | 0.8689 | AnglE-LLaMA-13B | 2023-09-22
Scaling Sentence Embeddings with Large Language Models | ✓ Link | 0.8585 | PromptEOL+CSE+LLaMA-30B | 2023-07-31
AnglE-optimized Text Embeddings | ✓ Link | 0.8579 | AnglE-LLaMA-7B-v2 | 2023-09-22
AnglE-optimized Text Embeddings | ✓ Link | 0.8549 | AnglE-LLaMA-7B | 2023-09-22
Scaling Sentence Embeddings with Large Language Models | ✓ Link | 0.8534 | PromptEOL+CSE+OPT-13B | 2023-07-31
Scaling Sentence Embeddings with Large Language Models | ✓ Link | 0.8480 | PromptEOL+CSE+OPT-2.7B | 2023-07-31
Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | ✓ Link | 0.8381 | PromCSE-RoBERTa-large (0.355B) | 2022-03-14
SimCSE: Simple Contrastive Learning of Sentence Embeddings | ✓ Link | 0.8236 | SimCSE-RoBERTa-large | 2021-04-18
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.8194 | Trans-Encoder-RoBERTa-large-cross (unsup.) | 2021-09-27
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.8176 | Trans-Encoder-RoBERTa-large-bi (unsup.) | 2021-09-27
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.8137 | Trans-Encoder-BERT-large-bi (unsup.) | 2021-09-27
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.7903 | Trans-Encoder-RoBERTa-base-cross (unsup.) | 2021-09-27
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ✓ Link | 0.779 | Trans-Encoder-BERT-base-bi (unsup.) | 2021-09-27
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | ✓ Link | 0.7647 | DiffCSE-BERT-base | 2022-04-21
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | ✓ Link | 0.7549 | DiffCSE-RoBERTa-base | 2022-04-21
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | ✓ Link | 0.7490 | SBERT-NLI-large | 2019-08-27
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ✓ Link | 0.732 | Mirror-RoBERTa-base (unsup.) | 2021-04-16
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ✓ Link | 0.713 | Mirror-BERT-base (unsup.) | 2021-04-16
Generating Datasets with Pretrained Language Models | ✓ Link | 0.7125 | Dino (STSb/🦕) | 2021-04-15
On the Sentence Embeddings from Pre-trained Language Models | ✓ Link | 0.6942 | BERT-large-flow (target) | 2020-11-02
An Unsupervised Sentence Embedding Method by Mutual Information Maximization | ✓ Link | 0.6121 | IS-BERT-NLI | 2020-09-25
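The Spearman correlation reported above is typically computed between model-predicted sentence similarities (usually the cosine similarity of the two sentence embeddings) and the human-annotated gold scores. The sketch below illustrates that evaluation procedure; the `embed` function, the toy sentence pairs, and the gold scores are placeholders, not code from any of the listed papers.

```python
# Minimal sketch of an STS14-style evaluation: Spearman correlation between
# cosine similarities of sentence embeddings and human gold scores.
# The embedding function below is a stand-in; plug in any model from the table.
import numpy as np
from scipy.stats import spearmanr


def embed(sentences):
    """Placeholder encoder: replace with a real sentence-embedding model.
    Returns one vector per input sentence (random here, for illustration only)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 768))


def evaluate_sts(pairs, gold_scores):
    """pairs: list of (sentence1, sentence2); gold_scores: human ratings (0-5)."""
    emb1 = embed([a for a, _ in pairs])
    emb2 = embed([b for _, b in pairs])
    # Cosine similarity between the embeddings of each sentence pair.
    sims = np.sum(emb1 * emb2, axis=1) / (
        np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1)
    )
    # Spearman's rho compares the rankings of predicted and gold scores,
    # so the absolute scale of the similarities does not matter.
    rho, _ = spearmanr(sims, gold_scores)
    return rho


if __name__ == "__main__":
    # Toy pairs with made-up gold scores; the real STS14 test set is much larger.
    pairs = [
        ("A man is playing a guitar.", "A person plays a guitar."),
        ("A dog runs in the park.", "The stock market fell today."),
    ]
    gold = [4.8, 0.2]
    print(f"Spearman correlation: {evaluate_sts(pairs, gold):.4f}")
```

Because Spearman correlation is rank-based, it rewards models that order sentence pairs by similarity consistently with human judgments, regardless of how the raw similarity values are scaled.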