DeepSeek: R1 Distill Llama 70B API
Access DeepSeek: R1 Distill Llama 70B from DeepSeek using the Puter.js AI API.
Model ID: deepseek/deepseek-r1-distill-llama-70b
Model Card
DeepSeek R1 Distill Llama 70B is a 70-billion-parameter dense model fine-tuned from Llama-3.3-70B-Instruct on 800K reasoning samples generated by DeepSeek R1. It brings R1's reasoning capabilities to a smaller, more accessible model while maintaining strong performance on math and coding benchmarks.
Context Window: N/A
Max Output: 131,072 tokens
Input Cost: $0.03 per million tokens
Output Cost: $0.11 per million tokens
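At these rates, a workload of one million input tokens and one million output tokens would cost roughly $0.03 + $0.11 = $0.14.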
API Usage Example
Add DeepSeek: R1 Distill Llama 70B to your app with just a few lines of code.
No API keys, no backend, no configuration required.
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Ask the model a question and render the reply once it arrives.
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "deepseek/deepseek-r1-distill-llama-70b"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
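For long reasoning responses, you can stream the output instead of waiting for the full reply. The sketch below assumes the stream: true option and the async-iteration pattern used in Puter.js's other chat examples, where each streamed part exposes a text field; treat it as a sketch rather than a definitive reference.

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Sketch: stream the reply chunk by chunk (assumes stream: true yields
        // an async iterable whose parts carry incremental text).
        (async () => {
            const response = await puter.ai.chat("Explain quantum computing in simple terms", {
                model: "deepseek/deepseek-r1-distill-llama-70b",
                stream: true
            });
            for await (const part of response) {
                // Append each chunk as it arrives instead of waiting for the full answer.
                document.body.innerText += part?.text ?? "";
            }
        })();
    </script>
</body>
</html>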
Get started with Puter.js
Add DeepSeek: R1 Distill Llama 70B to your app without worrying about API keys or setup.
Read the Docs · View Tutorials