
Sentiment Analysis on SLUE

Task: Sentiment Analysis
Results over time
Leaderboard
| Paper | Code | Recall (%) | F1 (%) | Text model | Model Name | Release Date |
|---|---|---|---|---|---|---|
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 60.4 | 63.3 | DeBERTa-L | W2V2-L-LL60K (pipeline approach, uses LM) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 60.2 | 63.3 | DeBERTa-L | W2V2-L-LL60K (pipeline approach) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 60.0 | 62.9 | DeBERTa-L | W2V2-B-LS960 (pipeline approach, uses LM) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 59.0 | 61.8 | DeBERTa-L | W2V2-B-LS960 (pipeline approach) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 49.2 | 48.5 | N/A | W2V2-L-LL60K (e2e approach) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 47.5 | 48.0 | N/A | HuBERT-B-LS960 (e2e approach) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 46.0 | 46.6 | N/A | W2V2-B-LS960 (e2e approach) | 2021-11-19 |
| SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | ✓ Link | 38.7 | 38.4 | N/A | W2V2-B-VP100K (e2e approach) | 2021-11-19 |
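The Recall and F1 columns in leaderboards of this kind are typically macro-averaged over the sentiment classes (per-class scores computed independently, then averaged). Below is a minimal sketch of how such macro metrics can be computed; the three-class label set and the function name are illustrative assumptions, not code from the SLUE toolkit.

```python
def macro_recall_f1(y_true, y_pred, labels):
    """Compute macro-averaged recall and F1 over the given label set.

    Per-class precision/recall/F1 are computed from true/false
    positives and false negatives, then averaged with equal weight
    per class (macro averaging).
    """
    recalls, f1s = [], []
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(recalls) / n, sum(f1s) / n

# Toy example with a hypothetical three-class sentiment label set
labels = ["positive", "neutral", "negative"]
y_true = ["positive", "neutral", "negative", "neutral"]
y_pred = ["positive", "neutral", "positive", "neutral"]
macro_rec, macro_f1 = macro_recall_f1(y_true, y_pred, labels)
```

Macro averaging gives each class equal weight regardless of its frequency, which is why Recall and F1 can diverge noticeably (as in the e2e rows above) when minority classes are predicted poorly.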