DeepSeek R1 Distill Llama 8B
Released: January 20, 2025
Last synced: Apr 7, 2026, 4:00 PM
Blended Price: Free per 1M tokens
Input Price: Free per 1M tokens
Output Price: Free per 1M tokens
Speed: 0 tok/s
TTFT (time to first token): 0.00 s
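The page doesn't publish its blended-price formula. As a rough sketch, a blended price is commonly a usage-weighted average of the input and output token prices; the 3:1 input:output weighting below is an assumption, and `blended_price` is a hypothetical helper, not AgMoDB code:

```python
# Hypothetical helper (assumption: 3:1 input:output token weighting,
# a common convention; AgMoDB's actual formula is not shown on the page).
def blended_price(input_usd_per_m: float, output_usd_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Return a blended USD price per 1M tokens."""
    total_weight = input_weight + output_weight
    return (input_weight * input_usd_per_m
            + output_weight * output_usd_per_m) / total_weight

# For this model both prices are listed as free, so any weighting blends to $0.00/M:
print(blended_price(0.0, 0.0))  # 0.0
```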
How DeepSeek R1 Distill Llama 8B Compares

[Interactive bubble chart. X axis: Blended Price (USD), filtered to $0.00–$30.00. Y axis: AgMoBench Overall, filtered to ≥ 3.5. Bubble size: Context Window, ranging from 16,384 to 2,000,000 tokens. Providers shown include ai21-labs, alibaba, anthropic, aws, azure, baidu, cohere, deepseek, google, meta, mistral, nvidia, kimi, xai, openai, zai, reka-ai, xiaomi, minimax, and ibm.]
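To make the chart's encoding concrete, here is a minimal static sketch in the same spirit. All model names and values below are invented for illustration; only the axis and bubble-size mapping follows the page:

```python
# Static sketch of the price-vs-score bubble chart (illustrative data only).
import matplotlib.pyplot as plt

models = ["Model A", "Model B", "Model C"]
blended_price_usd = [0.0, 3.0, 15.0]             # X axis: Blended Price (USD)
agmobench_overall = [3.6, 7.2, 9.1]              # Y axis: AgMoBench Overall
context_window = [131_072, 200_000, 1_000_000]   # bubble size: Context Window

# Scale token counts down so the bubbles fit the canvas.
sizes = [cw / 2_000 for cw in context_window]

plt.scatter(blended_price_usd, agmobench_overall, s=sizes, alpha=0.5)
for x, y, name in zip(blended_price_usd, agmobench_overall, models):
    plt.annotate(name, (x, y), fontsize=8)
plt.xlabel("Blended Price (USD)")
plt.ylabel("AgMoBench Overall")
plt.title("Price vs. benchmark score (bubble size = context window)")
plt.show()
```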
Benchmark Scores

Intelligence Index

Metric | Score
Intelligence Index | 12.1
Coding Index | —
Math Index | 41.3

Benchmark | Score
MMLU Pro | 0.5 / 100
GPQA Diamond | 0.3 / 100
HLE | 0.0 / 100
LiveCodeBench | 0.2 / 100
SciCode | 0.1 / 100
MATH-500 | 0.9 / 100
AIME | 0.3 / 30
AIME 2025 | 0.4 / 30
IFBench | 0.2 / 100
LCR | 0.0 / 100
Terminal-Bench Hard | — / 100
τ²-Bench | — / 100
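How the Intelligence Index is aggregated from these sub-scores isn't stated on the page. Purely as an illustration, one simple possibility is a mean of scale-normalized scores; this is an assumption, not AgMoDB's method:

```python
# Toy aggregation sketch (assumption: plain mean of 0-100-normalized scores;
# AgMoDB's actual Intelligence Index formula is not published on this page).
sub_scores = {          # (score, scale) pairs taken from the table above
    "MMLU Pro": (0.5, 100),
    "GPQA Diamond": (0.3, 100),
    "MATH-500": (0.9, 100),
    "AIME": (0.3, 30),  # out of 30, so normalize before averaging
}

normalized = [100 * score / scale for score, scale in sub_scores.values()]
toy_index = sum(normalized) / len(normalized)
print(f"{toy_index:.2f}")  # illustrative only; the page reports an index of 12.1
```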
External Benchmarks

From benchmark_matrix:

Benchmark | Score
AIME 2024 | 50.4
AIME 2025 (Matrix) | 27.8
Arena-Hard Auto | 17.6
Codeforces Rating | 1205.0
GPQA Diamond (Matrix) | 49.0
IFEval | 59.0
LiveCodeBench (Matrix) | 42.5
MATH-500 (Matrix) | 89.1
Predicted:

Benchmark | Score
AA-Omniscience Accuracy | 50.1 / 100
AA-Omniscience Hallucination Rate | 97.5 / 100
Aider Polyglot | 29.3 / 100
AlpacaEval 2.0 LC | 8.8 / 100
AlpacaEval 2.0 Raw | 6.5 / 100
ARC-AGI-1 | 37.0 / 100
ARC-AGI-1 Cost per Task | 0.3
ARC-AGI-2 | 15.5 / 100
ARC-AGI-2 Cost per Task | 0.4
BFCL (Berkeley Function Calling) | 27.3
AA Intelligence Index (Matrix) | 41.8
AA Long Context Reasoning (Matrix) | 77.0
BrowseComp | 86.8
BRUMO 2025 | 99.7
CMIMC 2025 | 90.2
CritPt | 13.9
GSM8K | 90.1
HLE (Matrix) | 18.0
HMMT Feb 2025 | 80.0
HMMT Nov 2025 | 94.3
HumanEval | 73.9
IFBench (Matrix) | 25.9
IMO 2025 | 47.1
MathArena Apex 2025 | 17.5
MMLU | 74.6
MMLU-Pro (Matrix) | 58.6
MMMU-Pro | 76.7
MRCR v2 | 81.8
OSWorld | 49.7
SimpleQA | 35.7
SMT 2025 | 87.7
SWE-bench Pro | 38.9
Tau-Bench Telecom (Matrix) | 99.0
Terminal-Bench 2.0 | 75.2
Terminal-Bench 1.0 | 34.7
USAMO 2025 | 13.9
Video-MMU | 60.8
browsecomp | 88.4
BullshitBench | 21.2 / 100
Aider Polyglot | 0.5
Apex Agents | 4.3
Arc Agi 2 | 0.1
BALROG | 0.0
BIG-Bench Hard | 3.0
BoolQ | 0.9
CAD-Eval | 6.6
Chess Puzzles | 0.4
CyBench | 0.2
DeepResearchBench | 0.4
FictionLiveBench | 0.6
Gdpval | 0.6
GeoBench | 0.0
GSM8K (Epoch) | 4.8
GSO | 0.8
HellaSwag | 0.0
Hle | 0.2
Lech Mazur Writing | 7.3
METR Time Horizons | 13.7
OTIS Mock AIME 2024–2025 | 0.1
PIQA | 0.8
Posttrainbench | 0.0
SimpleQA Verified (Epoch) | 0.6
The Agent Company | 1.1
TriviaQA | 15.6
VPCT | 0.2
WinoGrande | 0.7
FrontierMath | 27.6 / 100
GAIA Level 1 | 12.5
GAIA Level 2 | 1.2
GAIA Level 3 | 0.0
GAIA | 7.3 / 100
LegalBench | 27.6 / 100
LiveBench Coding | 78.3 / 100
LiveBench Data Analysis | 70.0 / 100
LiveBench Language | 81.2 / 100
LiveBench Math | 86.7 / 100
LiveBench Overall | 74.4 / 100
LiveBench Reasoning | 78.9 / 100
LongBench v2 Easy | 32.5
LongBench v2 Hard | 26.5
LongBench v2 | 32.9 / 100
MathVista | 45.7 / 100
MedQA (USMLE) | 69.3
MLE-bench | 58.8 / 100
MMMU | 60.7 / 100
MMTU Table Understanding | 60.1 / 100
MT-Bench | 5.7 / 10
NoLiMa (NIAH) | 92.3 / 100
OCRBench v2 | 58.7 / 100
RE-Bench | 100.0
SimpleBench | 37.8 / 100
SWE-bench Lite | 13.3 / 100
SWE-bench Verified | 45.9 / 100
τ²-Bench | 0.0 / 100
tau-bench Retail | 92.1 / 100
Terminal-Bench Hard | 0.0 / 100
Vectara Factual Consistency | 86.8 / 100
Vectara Hallucination Rate | 13.2 / 100
WebArena | 0.1 / 100
WeirdML | 26.5 / 100
WildBench | 28.3
From bigcodebench:

Benchmark | Score
BigCodeBench Complete | 15.3 / 100
BigCodeBench Instruct | 10.6 / 100
From hf-downloads:

Metric | Value
HuggingFace Downloads (30d) | 1,753,480
HuggingFace Likes | 844
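These counts change over time and can be re-checked against the Hugging Face Hub API. A minimal sketch; the repo id below is inferred from the model name and is an assumption:

```python
# Fetch current download/like counts from the Hugging Face Hub.
# The repo id is an assumption inferred from the model name on this page.
from huggingface_hub import model_info

info = model_info("deepseek-ai/DeepSeek-R1-Distill-Llama-8B")
print(info.downloads, info.likes)  # downloads are a rolling ~30-day count
```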
From open_llm_leaderboard:

Benchmark | Score
Open LLM Average | 13.1 / 100
Open LLM: BBH | 32.4 / 100
Open LLM: GPQA | 25.5 / 100
Open LLM: IFEval | 37.8 / 100
Open LLM: MATH Level 5 | 22.0 / 100
Open LLM: MMLU-PRO | 20.9 / 100
Open LLM: MUSR | 32.5 / 100