RUN THIS LLM



CodeLlama 34B

Meta · 34B · Code

Large code model excelling at code review, refactoring, and multi-file understanding.

VRAM Requirements

Quantization          VRAM
Q4_K_M (smallest)     20.4 GB
Q8_0 (balanced)       37.4 GB
FP16 (full quality)   68 GB
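The table values follow from a simple rule of thumb: VRAM for the weights is roughly parameter count times bits per weight, divided by 8 to get bytes (real usage adds some overhead for the KV cache and activations). A minimal sketch, assuming ~4.8 effective bits per weight for Q4_K_M and 16 for FP16; the helper name and values are illustrative, not part of this page:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough VRAM needed for model weights alone, in GB.

    params_billions: model size in billions of parameters (e.g. 34 for CodeLlama 34B)
    bits_per_weight: effective bits per weight for the quantization format
    """
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # bytes -> GB

# FP16 stores each weight in 16 bits: 34B params -> 68 GB, matching the table.
print(round(estimate_vram_gb(34, 16), 1))   # 68.0
# Q4_K_M averages roughly 4.8 bits per weight: ~20.4 GB.
print(round(estimate_vram_gb(34, 4.8), 1))  # 20.4
```

Quantization formats like GGUF's Q4_K_M mix block scales with 4-bit weights, so the effective bits-per-weight is slightly above the nominal 4; actual VRAM at runtime will also exceed these figures once context (KV cache) is allocated.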

Specifications

Benchmarks

