OpenCodePapers


Long-Context Understanding on LongBench
Leaderboard
| Paper | Code | Average Score | Model Name | Release Date |
|---|---|---|---|---|
| A Training-Free Length Extrapolation Approach for LLMs: Greedy Attention Logit Interpolation (GALI) | ✓ Link | 46.22 | GALI(Llama3-8b-ins-4k-to-16k) | 2025-02-04 |
| A Training-Free Length Extrapolation Approach for LLMs: Greedy Attention Logit Interpolation (GALI) | ✓ Link | 45.38 | GALI(Llama3-8b-ins-8k-to-32k) | 2025-02-04 |
| A Training-Free Length Extrapolation Approach for LLMs: Greedy Attention Logit Interpolation (GALI) | ✓ Link | 45.17 | GALI(Llama3-8b-ins-8k-to-16k) | 2025-02-04 |