RUN THIS LLM
Gemma 4 31B
Google · 31B · Vision
The largest dense model in the Gemma 4 family: 31B parameters with strong multimodal and reasoning capabilities across vision and language tasks.
VRAM Requirements
| Quantization | VRAM |
|---|---|
| Q4_K_M (smallest) | 18.6 GB |
| Q8_0 (balanced) | 34.1 GB |
| FP16 (full quality) | 62 GB |
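The figures above follow directly from the parameter count: weights dominate VRAM use, at roughly (params × bits-per-weight ÷ 8) bytes plus some fixed overhead for buffers and the KV cache. A minimal sketch of that arithmetic, where the effective bits-per-weight values and the 1 GB overhead are assumptions for illustration, not official figures:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate: weight memory plus a fixed overhead allowance."""
    weight_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return round(weight_gb + overhead_gb, 1)

# Effective bits-per-weight here are rough assumptions for each format.
for name, bits in [("Q4_K_M", 4.5), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{name}: ~{estimate_vram_gb(31, bits)} GB")
```

The estimates land close to, but not exactly on, the table values, since real quantization formats mix bit widths across tensors and runtime overhead varies with context length.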
Specifications
- Parameters: 31B
- Category: Vision
- Max context: 128K tokens
- System RAM: 32 GB minimum
- HuggingFace: google/gemma-4-31B-it
Benchmarks
- MMLU: 85 — General knowledge & reasoning (5-shot)
- HumanEval: 80 — Code generation (pass@1)
- MATH: 89 — Competition-level math reasoning