Total number of trainable parameters in the model, as catalogued by Epoch AI's frontier model database.
A simple, well-understood metric that correlates with model capability, and one that is consistently reported for most models.
Parameter count alone doesn't determine performance — architecture, training data quality, and compute all matter. Mixture-of-experts (MoE) models further complicate comparisons, since only a fraction of their parameters is active per token.
Lower is better for deployment efficiency at similar quality. Values are parameter counts and can range from millions to trillions.
| # | Model | Parameters |
|---|---|---|
| 1 | PaLM 2 | 340B |
| 2 | Meta: Llama 3.1 405B Instruct | 405B |
| 3 | Writer: Palmyra X5 | 540.35B |
| 4 | Switchpoint Router | 1.571T |
| 5 | GPT-4 | 1.8T |
| 6 | Grok 3 | 3T |
| 7 | Grok 4 | 3T |
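Since raw parameter counts span millions to trillions, a small helper for rendering them in readable units can be useful when presenting data like the table above. This is a minimal sketch; the function name and unit thresholds are illustrative, not part of any catalogued dataset.

```python
def format_params(n: float) -> str:
    """Render a raw parameter count as a readable unit string (M/B/T)."""
    for divisor, unit in ((1e12, "T"), (1e9, "B"), (1e6, "M")):
        if n >= divisor:
            # %g trims trailing zeros: 405.0 -> "405", 1.571 -> "1.571"
            return f"{n / divisor:g}{unit}"
    return f"{n:g}"

print(format_params(405e9))     # 405B
print(format_params(1.571e12))  # 1.571T
```

The `:g` format spec keeps fractional counts like 540.35B intact while dropping spurious trailing zeros from round numbers.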