WizardLM-2 8x22B
Last synced Apr 7, 2026, 2:04 PM

Context window: 66K tokens
Blended price: $0.62 per 1M tokens
Input price: $0.62 per 1M tokens
Output price: $0.62 per 1M tokens
Speed: —
TTFT (time to first token): —
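
The prices above are quoted in USD per million tokens. As a rough illustration of how to read them, the sketch below estimates the cost of a single request from the listed per-million-token prices and shows one common way a blended price can be derived, assuming a 3:1 input-to-output token ratio; the ratio AgMoDB actually uses is not stated on this page, and the function names are illustrative only.

```python
# Illustrative only: how per-million-token prices translate into request cost.
# The 3:1 blend ratio below is an assumption, not AgMoDB's documented method.

INPUT_PRICE_PER_M = 0.62   # USD per 1M input tokens (from this page)
OUTPUT_PRICE_PER_M = 0.62  # USD per 1M output tokens (from this page)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-million-token prices."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted per-million-token price assuming `ratio` input tokens per output token."""
    return (ratio * input_price + output_price) / (ratio + 1.0)

if __name__ == "__main__":
    # e.g. a 4,000-token prompt with a 1,000-token completion
    print(f"request cost: ${request_cost(4_000, 1_000):.4f}")
    # with equal input and output prices the blend is $0.62/M regardless of ratio
    print(f"blended price: ${blended_price(INPUT_PRICE_PER_M, OUTPUT_PRICE_PER_M):.2f}/M")
```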

Benchmark Scores

How WizardLM-2 8x22B Compares (interactive chart): X axis Blended Price (USD), Y axis AgMoBench Overall, bubble size Context Window (range 16,384 to 2,000,000 tokens). Filters: Blended Price $0.00 to $30.00, AgMoBench Overall ≥ 3.5; providers include ai21-labs, alibaba, anthropic, aws, azure, baidu, cohere, deepseek, and 15 more.
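
The comparison chart is driven by simple numeric filters (a price range and a minimum AgMoBench Overall score) plus a provider list. A minimal sketch of that filtering step is below; the record fields and sample values are hypothetical stand-ins, since the page does not expose its underlying data schema.

```python
# Hypothetical sketch of the chart's filtering logic; field names and sample
# records are assumptions, not AgMoDB's actual schema.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    provider: str
    blended_price: float      # USD per 1M tokens (X axis)
    agmobench_overall: float  # score used on the Y axis
    context_window: int       # tokens, drives bubble size

def filter_models(models, max_price=30.0, min_score=3.5, providers=None):
    """Keep models inside the price range, above the score cutoff, and from allowed providers."""
    return [
        m for m in models
        if 0.0 <= m.blended_price <= max_price
        and m.agmobench_overall >= min_score
        and (providers is None or m.provider in providers)
    ]

# Example data (invented values, for illustration only)
models = [
    ModelRecord("example-model-a", "mistral", 0.62, 3.5, 65_536),
    ModelRecord("example-model-b", "anthropic", 45.0, 4.2, 200_000),
]
print([m.name for m in filter_models(models)])  # only example-model-a passes the $30 price cap
```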
Intelligence Index: —
Coding Index: —
Math Index: —
MMLU Pro: — / 100
GPQA Diamond: — / 100
HLE: — / 100
LiveCodeBench: — / 100
SciCode: — / 100
MATH-500: — / 100
AIME: — / 30
AIME 2025: — / 30
IFBench: — / 100
LCR: — / 100
Terminal-Bench Hard: — / 100
τ²-Bench: — / 100

External Benchmarks
AA-Omniscience Accuracy (Predicted): 39.2 / 100
AA-Omniscience Hallucination Rate (Predicted): 95.5 / 100
Aider Polyglot (Predicted): 54.3 / 100
AIME (Predicted): 0.8 / 30
AIME 2025 (Predicted): 0.7 / 30
AlpacaEval 2.0 LC (Predicted): 38.2 / 100
AlpacaEval 2.0 Raw (Predicted): 29.8 / 100
ARC-AGI-1 (Predicted): 9.5 / 100
ARC-AGI-1 Cost per Task (Predicted): 0.0
ARC-AGI-2 (Predicted): 0.0 / 100
ARC-AGI-2 Cost per Task (Predicted): 0.1
BFCL (Berkeley Function Calling) (Predicted): 48.8
BigCodeBench Complete (Predicted): 60.1 / 100
BigCodeBench Instruct (Predicted): 49.8 / 100
AA Intelligence Index (Matrix) (Predicted): 69.3
AA Long Context Reasoning (Matrix) (Predicted): 70.5
AIME 2024 (Predicted): 90.7
AIME 2025 (Matrix) (Predicted): 85.1
Arena-Hard Auto (Predicted): 81.8
BrowseComp (Predicted): 54.9
BRUMO 2025 (Predicted): 80.6
CMIMC 2025 (Predicted): 86.0
CritPt (Predicted): 0.2
GPQA Diamond (Matrix) (Predicted): 78.2
GSM8K (Predicted): 73.2
HLE (Matrix) (Predicted): 65.0
HMMT Feb 2025 (Predicted): 63.8
HMMT Nov 2025 (Predicted): 88.6
HumanEval (Predicted): 90.8
IFBench (Matrix) (Predicted): 44.5
IFEval (Predicted): 88.0
IMO 2025 (Predicted): 13.2
LiveCodeBench (Matrix) (Predicted): 69.9
MATH-500 (Matrix) (Predicted): 96.9
MathArena Apex 2025 (Predicted): 0.6
MMLU (Predicted): 88.2
MMLU-Pro (Matrix) (Predicted): 81.7
MMMU-Pro (Predicted): 81.4
MRCR v2 (Predicted): 74.7
OSWorld (Predicted): 34.8
SimpleQA (Predicted): 24.4
SMT 2025 (Predicted): 78.5
SWE-bench Pro (Predicted): 34.7
Tau-Bench Telecom (Matrix) (Predicted): 95.0
Terminal-Bench 2.0 (Predicted): 17.0
Terminal-Bench 1.0 (Predicted): 18.4
USAMO 2025 (Predicted): 9.6
Video-MMU (Predicted): 86.8
browsecomp (Predicted): 50.5
BullshitBench (Predicted): 56.8 / 100
Aider Polyglot (Predicted): 0.1
Apex Agents (Predicted): 2.0
Arc Agi 2 (Predicted): 5.2
BALROG (Predicted): 0.0
BIG-Bench Hard (Predicted): 3.0
BoolQ (Predicted): 0.8
CAD-Eval (Predicted): 4.1
Chess Puzzles (Predicted): 0.1
CyBench (Predicted): 0.2
DeepResearchBench (Predicted): 0.3
FictionLiveBench (Predicted): 0.6
Gdpval (Predicted): 0.2
GeoBench (Predicted): 0.0
GSM8K (Epoch) (Predicted): 14.7
GSO (Predicted): 0.0
HellaSwag (Predicted): 1.2
Hle (Predicted): 0.2
Lech Mazur Writing (Predicted): 7.7
METR Time Horizons (Predicted): 0.7
OTIS Mock AIME 2024–2025 (Predicted): 0.5
PIQA (Predicted): 0.9
Posttrainbench (Predicted): 0.0
SimpleQA Verified (Epoch) (Predicted): 0.3
The Agent Company (Predicted): 2.1
TriviaQA (Predicted): 6.8
VPCT (Predicted): 0.4
WinoGrande (Predicted): 0.9
FrontierMath (Predicted): 35.6 / 100
GAIA Level 1 (Predicted): 69.8
GAIA Level 2 (Predicted): 58.0
GAIA Level 3 (Predicted): 37.3
GAIA (Predicted): 53.5 / 100
GPQA Diamond (Predicted): 0.7 / 100
HLE (Predicted): 0.1 / 100
IFBench (Predicted): 0.5 / 100
LCR (Predicted): 0.2 / 100
LegalBench (Predicted): 91.2 / 100
LiveBench Coding (Predicted): 64.6 / 100
LiveBench Data Analysis (Predicted): 40.8 / 100
LiveBench Language (Predicted): 50.0 / 100
LiveBench Math (Predicted): 58.2 / 100
LiveBench Overall (Predicted): 43.6 / 100
LiveBench Reasoning (Predicted): 37.4 / 100
LiveCodeBench (Predicted): 0.6 / 100
LongBench v2 Easy (Predicted): 53.2
LongBench v2 Hard (Predicted): 48.4
LongBench v2 (Predicted): 38.2 / 100
MATH-500 (Predicted): 0.9 / 100
MathVista (Predicted): 60.0 / 100
MedQA (USMLE) (Predicted): 89.9
MLE-bench (Predicted): 20.0 / 100
MMLU Pro (Predicted): 0.8 / 100
MMMU (Predicted): 76.2 / 100
MMTU Table Understanding (Predicted): 52.7 / 100
MT-Bench (Predicted): 8.0 / 10
NoLiMa (NIAH) (Predicted): 94.7 / 100
OCRBench v2 (Predicted): 85.9 / 100
RE-Bench (Predicted): 1.3
SciCode (Predicted): 0.5 / 100
SimpleBench (Predicted): 29.2 / 100
simpleqa (Predicted): 26.3
SWE-bench Lite (Predicted): 32.8 / 100
SWE-bench Verified (Predicted): 41.2 / 100
τ²-Bench (Predicted): 0.3 / 100
tau-bench Retail (Predicted): 75.5 / 100
Terminal-Bench Hard (Predicted): 0.3 / 100
Vectara Factual Consistency (Predicted): 93.2 / 100
Vectara Hallucination Rate (Predicted): 6.8 / 100
WebArena (Predicted): 8.8 / 100
WeirdML (Predicted): 40.2 / 100
WildBench (Predicted): 53.8
Open LLM Average (open_llm_leaderboard): 33.1 / 100
Open LLM: BBH (open_llm_leaderboard): 63.8 / 100
Open LLM: GPQA (open_llm_leaderboard): 38.2 / 100
Open LLM: IFEval (open_llm_leaderboard): 52.7 / 100
Open LLM: MATH Level 5 (open_llm_leaderboard): 25.0 / 100
Open LLM: MMLU-PRO (open_llm_leaderboard): 46.0 / 100
Open LLM: MUSR (open_llm_leaderboard): 43.9 / 100
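
Note that the listed Open LLM Average (33.1) is lower than the plain arithmetic mean of the six sub-scores, which is consistent with the Hugging Face Open LLM Leaderboard's practice of averaging baseline-normalized scores rather than raw accuracies. The small check below uses only the raw values shown on this page; the normalization explanation is an inference, not something the page states.

```python
# Raw sub-scores as listed on this page (all on a 0-100 scale).
open_llm_scores = {
    "BBH": 63.8,
    "GPQA": 38.2,
    "IFEval": 52.7,
    "MATH Level 5": 25.0,
    "MMLU-PRO": 46.0,
    "MUSR": 43.9,
}

raw_mean = sum(open_llm_scores.values()) / len(open_llm_scores)
print(f"raw mean of sub-scores: {raw_mean:.1f}")  # ~44.9, vs. the listed average of 33.1
# The gap suggests the listed average is computed over baseline-normalized scores,
# as the Open LLM Leaderboard does, rather than over the raw values shown here.
```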