    DeepSeek: R1 Distill Llama 70B

    deepseek/deepseek-r1-distill-llama-70b

    Created Jan 23, 2025 · 131,072 context
    $0.03/M input tokens · $0.13/M output tokens
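
    At these rates, the cost of a single request is input_tokens / 1,000,000 × $0.03 plus output_tokens / 1,000,000 × $0.13. A small worked sketch in Python (the token counts below are hypothetical; only the per-million prices come from this page):

        # Estimate the cost of one request at this model's listed rates.
        # Token counts are hypothetical; only the per-million prices are from the page.
        INPUT_PRICE_PER_M = 0.03    # USD per 1M input tokens
        OUTPUT_PRICE_PER_M = 0.13   # USD per 1M output tokens

        input_tokens = 8_000
        output_tokens = 1_500

        cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
             + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
        print(f"Estimated cost: ${cost:.6f}")  # ~$0.000435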

    DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, fine-tuned on outputs from DeepSeek R1. The distillation achieves high performance across multiple benchmarks, including:

    • AIME 2024 pass@1: 70.0
    • MATH-500 pass@1: 94.5
    • CodeForces Rating: 1633

    Because it is fine-tuned on DeepSeek R1's outputs, the model achieves performance competitive with larger frontier models.

    Providers for R1 Distill Llama 70B

    OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
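
    For reference, requests to this model go through OpenRouter's OpenAI-compatible chat completions endpoint using the model slug shown above. A minimal Python sketch, assuming the requests library and an API key in an OPENROUTER_API_KEY environment variable (the prompt is only illustrative):

        # Send one chat completion request for this model via OpenRouter.
        # Provider selection and fallbacks are handled by OpenRouter itself.
        import os
        import requests

        response = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={
                "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
                "Content-Type": "application/json",
            },
            json={
                "model": "deepseek/deepseek-r1-distill-llama-70b",
                "messages": [
                    {"role": "user", "content": "Prove that the square root of 2 is irrational."}
                ],
            },
            timeout=120,
        )
        response.raise_for_status()
        print(response.json()["choices"][0]["message"]["content"])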