RUN THIS LLM

CodeLlama 70B

Meta · 70B · Code

The most capable CodeLlama variant, with near-GPT-4-level code generation and strong debugging ability.

VRAM Requirements

Quantization          VRAM
Q4_K_M (smallest)     42 GB
Q8_0 (balanced)       77 GB
FP16 (full quality)   140 GB
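The figures above roughly track parameters × bytes per weight. A minimal sketch of that estimate (the effective bits-per-weight values for the quantized formats are assumptions — GGUF quant formats store per-block scale metadata, so effective bits run slightly above the nominal 4 or 8, and real-world usage adds KV-cache and activation overhead on top):

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough weights-only VRAM estimate in GB (no KV cache or activations)."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# FP16: 70B params at 16 bits/weight -> 140 GB, matching the table.
print(round(estimate_vram_gb(70, 16)))   # → 140

# Quantized formats: effective bits are assumptions, not official figures.
print(round(estimate_vram_gb(70, 4.8)))  # ~Q4_K_M ballpark
print(round(estimate_vram_gb(70, 8.8)))  # ~Q8_0 ballpark with overhead
```

Working backwards from the table, Q4_K_M at 42 GB implies roughly 4.8 effective bits per weight for a 70B model, which is why the quantized rows exceed a naive 4-bit or 8-bit calculation.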

Specifications

Benchmarks
