DeepSeek: R1 Distill Llama 70B

DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance acr...
Pillar score = mean of the 2 scaled signal values = 24.7.
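A minimal sketch of that arithmetic, assuming hypothetical scaled values; only their count (2) and their mean (24.7) come from this page, and the scaling step itself is not shown here:

```python
from statistics import mean

# Hypothetical scaled signal values: the page states only that there
# are two of them and that their mean is 24.7.
scaled_values = [21.6, 27.8]

pillar = mean(scaled_values)
print(f"Pillar = mean of {len(scaled_values)} scaled values = {pillar}")  # -> 24.7
```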
Awaiting first reading: Reddit mentions (7d), Bluesky mentions (7d), and OpenRouter tokens (30d) apply to this agent and will be ingested on the next tier tick.
Not applicable: HF downloads (30d), GitHub stars, and GitHub mentions (7d) will never apply to this agent, because it lacks the prerequisites (no GitHub repo, no HF mirror, etc.).
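A minimal sketch of how these two signal states might be modeled, with all names hypothetical; the real AgentTape pipeline is not published here:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    name: str
    window: str             # e.g. "7d", "30d"
    prerequisite_met: bool  # e.g. a GitHub repo must exist for GitHub stars
    last_reading: Optional[float] = None

def status(signal: Signal) -> str:
    """Classify a signal into the states shown above."""
    if not signal.prerequisite_met:
        return "not applicable"          # can never apply to this agent
    if signal.last_reading is None:
        return "awaiting first reading"  # ingested on the next tier tick
    return "active"

for s in [Signal("Reddit mentions", "7d", prerequisite_met=True),
          Signal("GitHub stars", "all-time", prerequisite_met=False)]:
    print(f"{s.name}: {status(s)}")
```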
<a href="https://agenttape.com/agents/deepseek-r1-distill-llama-70b"><img src="https://agenttape.com/api/badge/deepseek-r1-distill-llama-70b.svg" alt="AgentTape" /></a>
| Benchmark | Score | Max | Captured |
|---|---|---|---|
| open-llm-leaderboard | 27.81 | 100.00 | 3h ago |
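The page does not say how a benchmark row feeds the pillar score above; one plausible, but assumed, reading is that each score is scaled linearly against its max:

```python
def scaled(score: float, max_score: float) -> float:
    """Normalize a benchmark score onto a 0-100 scale.

    An assumption for illustration, not a documented AgentTape formula.
    """
    return 100.0 * score / max_score

print(scaled(27.81, 100.00))  # 27.81: this row is already on a 0-100 scale
```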