Distilled LLaMA by DeepSeek, fast and optimized for real-world tasks
DeepSeek introduced its first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1, which use reinforcement learning to strengthen reasoning performance. DeepSeek-R1 achieves state-of-the-art results among open reasoning models, and DeepSeek has open-sourced several distilled models derived from it.

The models provided here are the Distill-Llama variants: Llama-based models fine-tuned on the responses and reasoning output of the full DeepSeek-R1 model.
DeepSeek-R1-Distill-Llama can help with:

- Reasoning-heavy tasks such as math problem solving (see the AIME and MATH-500 benchmarks below)
- Code generation and software engineering tasks (LiveCodeBench, SWE-bench Verified)
- General knowledge and question answering in English and Chinese
| Attribute | Details |
|---|---|
| Provider | DeepSeek |
| Architecture | llama |
| Cutoff date | May 2024ⁱ |
| Languages | English, Chinese |
| Tool calling | ✅ |
| Input modalities | Text |
| Output modalities | Text |
| License | MIT |
i: Estimated
| Model variant | Parameters | Quantization | Context window | VRAM¹ | Size |
|---|---|---|---|---|---|
| ai/deepseek-r1-distill-llama:latest | 8B | IQ2_XXS/Q4_K_M | 131K tokens | 5.33 GiB | 4.58 GB |
| ai/deepseek-r1-distill-llama:8B-Q4_0 | 8B | Q4_0 | 131K tokens | 5.09 GiB | 4.33 GB |
| ai/deepseek-r1-distill-llama:8B-Q4_K_M | 8B | IQ2_XXS/Q4_K_M | 131K tokens | 5.33 GiB | 4.58 GB |
| ai/deepseek-r1-distill-llama:8B-F16 | 8B | F16 | 131K tokens | 15.01 GiB | 14.96 GB |
| ai/deepseek-r1-distill-llama:70B-Q4_0 | 70B | Q4_0 | 131K tokens | 38.73 GiB | 37.22 GB |
| ai/deepseek-r1-distill-llama:70B-Q4_K_M | 70B | IQ2_XXS/Q4_K_M | 131K tokens | 41.11 GiB | 39.59 GB |

¹: VRAM estimated based on model characteristics.

The `latest` tag points to `8B-Q4_K_M`.
First, pull the model:

```bash
docker model pull ai/deepseek-r1-distill-llama
```

Then run the model:

```bash
docker model run ai/deepseek-r1-distill-llama
```
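The commands above use the default `latest` tag (equivalent to `8B-Q4_K_M`). To pin a specific variant, append one of the tags from the table above, for example:

```bash
# Pull and run a pinned variant instead of latest; per the table above,
# the 8B-Q4_0 build is the smallest (~4.3 GB download, ~5.1 GiB VRAM).
docker model pull ai/deepseek-r1-distill-llama:8B-Q4_0
docker model run ai/deepseek-r1-distill-llama:8B-Q4_0
```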
For more information on Docker Model Runner, explore the documentation.
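If you prefer calling the model over HTTP, Docker Model Runner exposes an OpenAI-compatible API. Below is a minimal sketch, assuming host-side TCP access is enabled on Docker Desktop's default port 12434 (`docker desktop enable model-runner --tcp 12434`); the host, port, and path may differ in your setup:

```bash
# Chat completion against Docker Model Runner's OpenAI-compatible
# endpoint; port 12434 and the /engines/v1 path are assumptions based
# on Docker Desktop defaults -- adjust for your environment.
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/deepseek-r1-distill-llama",
        "messages": [
          {"role": "user", "content": "Explain model distillation in two sentences."}
        ]
      }'
```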
This model is sensitive to prompting, and few-shot prompting consistently degrades its performance. For best results, describe the problem directly and specify the output format in a zero-shot setting, as sketched below.
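For example, a single zero-shot request that states the task and the expected output format, using `docker model run`'s one-shot prompt form (the prompt itself is illustrative, not part of the model card):

```bash
# Zero-shot: describe the problem and the output format in one message,
# with no few-shot examples prepended.
docker model run ai/deepseek-r1-distill-llama \
  "Solve x^2 - 5x + 6 = 0. Reason step by step, then give the final answer in \boxed{}."
```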
| Category | Benchmark | DeepSeek R1 |
|---|---|---|
| English | MMLU (Pass@1) | 90.8 |
| English | MMLU-Redux (EM) | 92.9 |
| English | MMLU-Pro (EM) | 84.0 |
| English | DROP (3-shot F1) | 92.2 |
| English | IF-Eval (Prompt Strict) | 83.3 |
| English | GPQA-Diamond (Pass@1) | 71.5 |
| English | SimpleQA (Correct) | 30.1 |
| English | FRAMES (Acc.) | 82.5 |
| English | AlpacaEval2.0 (LC-winrate) | 87.6 |
| English | ArenaHard (GPT-4-1106) | 92.3 |
| Code | LiveCodeBench (Pass@1-COT) | 65.9 |
| Code | Codeforces (Percentile) | 96.3 |
| Code | Codeforces (Rating) | 2029 |
| Code | SWE Verified (Resolved) | 49.2 |
| Code | Aider-Polyglot (Acc.) | 53.3 |
| Math | AIME 2024 (Pass@1) | 79.8 |
| Math | MATH-500 (Pass@1) | 97.3 |
| Math | CNMO 2024 (Pass@1) | 78.8 |
| Chinese | CLUEWSC (EM) | 92.8 |
| Chinese | C-Eval (EM) | 91.8 |
| Chinese | C-SimpleQA (Correct) | 63.7 |