DeepSeek: DeepSeek V4 Pro
deepseek/deepseek-v4-pro
Access DeepSeek V4 Pro from DeepSeek using the Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "deepseek/deepseek-v4-pro"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "deepseek/deepseek-v4-pro"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
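For long answers you may prefer to render text as it arrives instead of waiting for the full reply. The sketch below assumes puter.ai.chat accepts a stream: true option and then yields parts exposing a text field, as described in Puter's documentation; treat those names as assumptions to verify against the docs.

<html>
<body>
    <div id="output"></div>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        (async () => {
            // Assumption: stream: true makes puter.ai.chat return an async iterable of parts.
            const response = await puter.ai.chat(
                "Explain quantum computing in simple terms",
                { model: "deepseek/deepseek-v4-pro", stream: true }
            );
            for await (const part of response) {
                // Assumption: each streamed part carries its text chunk on part.text.
                if (part?.text) {
                    document.getElementById("output").textContent += part.text;
                }
            }
        })();
    </script>
</body>
</html>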
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "deepseek/deepseek-v4-pro",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
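Because the endpoint above is OpenAI-compatible, you can also call it directly from JavaScript with fetch and no SDK. This sketch simply mirrors the cURL request; YOUR_PUTER_AUTH_TOKEN is the same placeholder used above and must be replaced with a real token.

// Plain fetch against Puter's OpenAI-compatible endpoint (Node 18+ or any browser).
const res = await fetch("https://api.puter.com/puterai/openai/v1/chat/completions", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_PUTER_AUTH_TOKEN",
    },
    body: JSON.stringify({
        model: "deepseek/deepseek-v4-pro",
        messages: [
            { role: "user", content: "Explain quantum computing in simple terms" },
        ],
    }),
});

const data = await res.json();
// OpenAI-compatible responses put the reply under choices[0].message.content.
console.log(data.choices[0].message.content);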
Model Card
DeepSeek V4 Pro is a 1.6T-parameter Mixture-of-Experts model from DeepSeek with 49B parameters activated per token, supporting a 1M-token context window. It is positioned as the strongest open-weight model currently available.
V4 Pro leads all open-source models in math, coding, and STEM reasoning. On LiveCodeBench it scores 93.5, ahead of Gemini 3.1 Pro (91.7) and Claude Opus 4.6 (88.8). Its Codeforces rating of 3206 also tops GPT-5.4 (3168). On agentic tool-use benchmarks like MCPAtlas, it reaches near-parity with Opus 4.6. DeepSeek acknowledges it trails GPT-5.4 and Gemini 3.1 Pro overall by roughly 3–6 months of frontier development.
Priced at $1.74/M input and $3.48/M output tokens, a fraction of the cost of comparable closed-source models, it's a strong pick for complex reasoning, agentic coding, and knowledge-intensive tasks.
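As a back-of-the-envelope check on those prices, the snippet below estimates the cost of a single request from its token counts. The example token counts are arbitrary; only the per-million rates come from this page.

// Per-million-token rates for deepseek/deepseek-v4-pro (from this page).
const INPUT_PER_M = 1.74;   // USD per 1M input tokens
const OUTPUT_PER_M = 3.48;  // USD per 1M output tokens

// Hypothetical request: 12,000 input tokens, 2,500 output tokens.
const inputTokens = 12_000;
const outputTokens = 2_500;

const cost =
    (inputTokens / 1_000_000) * INPUT_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_PER_M;

console.log(`Estimated cost: $${cost.toFixed(4)}`); // about $0.0296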
Context Window: 1M tokens
Max Output: 384K tokens
Input Cost: $1.74 per million tokens
Output Cost: $3.48 per million tokens
Release Date: Apr 24, 2026
Output Speed: 36 tokens / sec
Latency: 2.08s (time to first token)
Model Playground
Try DeepSeek V4 Pro instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How DeepSeek V4 Pro performs on standard evaluations.
| Benchmark | Score |
|---|---|
| GPQA Diamond (graduate-level science Q&A) | 88.8% |
| Humanity's Last Exam (cross-domain reasoning) | 35.9% |
| SciCode (scientific programming) | 50.0% |
| IFBench (instruction following) | 76.5% |
| LCR (long-context reasoning) | 66.3% |
| Terminal-Bench Hard (agentic terminal tasks) | 46.2% |
| τ²-Bench (tool use / agents) | 96.2% |
Scores sourced from Artificial Analysis.
Find other DeepSeek models →
DeepSeek V4 Flash
DeepSeek V4 Flash is a lightweight, efficiency-focused Mixture-of-Experts model from DeepSeek, with 284B total parameters and 13B activated per token. It supports a 1M-token context window and configurable reasoning modes (standard, high, and max thinking effort). Designed as the fast and economical option in the V4 family, Flash delivers reasoning capabilities that closely approach the larger V4 Pro, and performs on par with it on simpler agentic tasks. In its max reasoning mode, it achieves comparable reasoning scores to Pro when given a larger thinking budget. At $0.14/M input and $0.28/M output tokens, it's one of the cheapest frontier-tier models available — well suited for high-throughput workloads like coding assistants, chat systems, and agent pipelines where latency and cost matter most.
DeepSeek V3.2
DeepSeek V3.2 is the December 2025 flagship model featuring DeepSeek Sparse Attention for efficiency and massive reinforcement learning post-training, achieving GPT-5-level performance. It's the first DeepSeek model to integrate thinking directly into tool-use and excels at agentic AI tasks.
DeepSeek V3.2 Speciale
DeepSeek V3.2-Speciale is a high-compute variant designed exclusively for maximum reasoning accuracy, achieving gold-medal performance in IMO 2025, IOI 2025, and ICPC World Finals. It rivals Gemini 3.0 Pro but requires higher token usage and doesn't support tool calling.
Frequently Asked Questions
How do I access DeepSeek V4 Pro?

You can access DeepSeek V4 Pro by DeepSeek through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is DeepSeek V4 Pro free to use?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add DeepSeek V4 Pro to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does DeepSeek V4 Pro cost?

| | Price per 1M tokens |
|---|---|
| Input | $1.74 |
| Output | $3.48 |
Who created DeepSeek V4 Pro, and when was it released?

DeepSeek V4 Pro was created by DeepSeek and released on Apr 24, 2026.
What is the context window of DeepSeek V4 Pro?

DeepSeek V4 Pro supports a context window of 1M tokens. For reference, that is roughly equivalent to 2,097 pages of text.
How long can DeepSeek V4 Pro's responses be?

DeepSeek V4 Pro can generate up to 384K tokens in a single response.
Can I use DeepSeek V4 Pro with JavaScript frameworks?

Yes — the DeepSeek V4 Pro API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
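As an illustration of how little glue is needed, here is a hedged sketch of a reusable helper that any framework component could call once the Puter.js script (or npm package) is loaded. The askDeepSeek name and error handling are illustrative, not part of the Puter API.

// Illustrative helper: wraps the same puter.ai.chat call shown above so any
// framework (React, Vue, Svelte, plain JS) can await it from a component.
async function askDeepSeek(prompt) {
    try {
        const response = await puter.ai.chat(prompt, {
            model: "deepseek/deepseek-v4-pro",
        });
        return response.message.content;
    } catch (err) {
        console.error("DeepSeek V4 Pro request failed:", err);
        throw err;
    }
}

// Usage from any event handler or effect:
// const answer = await askDeepSeek("Explain quantum computing in simple terms");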
Get started with Puter.js
Add DeepSeek V4 Pro to your app without worrying about API keys or setup.
Read the Docs · View Tutorials