DeepSeek

DeepSeek: DeepSeek V4 Pro

deepseek/deepseek-v4-pro

Access DeepSeek V4 Pro from DeepSeek using the Puter.js AI API.

Get Started
JavaScript (npm)
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "deepseek/deepseek-v4-pro"
}).then(response => {
    document.body.innerHTML = response.message.content;
});

Browser (HTML)
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "deepseek/deepseek-v4-pro"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>

Python (OpenAI-compatible API)
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)

cURL
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "deepseek/deepseek-v4-pro",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'

Model Card

DeepSeek V4 Pro is a 1.6T-parameter Mixture-of-Experts model from DeepSeek with 49B parameters activated per token, supporting a 1M-token context window. It is positioned as the strongest open-weight model currently available.

V4 Pro leads all open-source models in math, coding, and STEM reasoning. On LiveCodeBench it scores 93.5, ahead of Gemini 3.1 Pro (91.7) and Claude Opus 4.6 (88.8). Its Codeforces rating of 3206 also tops GPT-5.4 (3168). On agentic tool-use benchmarks like MCPAtlas, it reaches near-parity with Opus 4.6. DeepSeek acknowledges it trails GPT-5.4 and Gemini 3.1 Pro overall by roughly 3–6 months of frontier development.

Priced at $1.74/M input tokens and $3.48/M output tokens, a fraction of the cost of comparable closed-source models, it's a strong pick for complex reasoning, agentic coding, and knowledge-intensive tasks.
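At those rates, per-request cost is easy to estimate: tokens divided by one million, times the per-million rate. A minimal sketch (the token counts in the example are made-up illustrative values):

```python
# Published per-million-token rates for deepseek/deepseek-v4-pro
INPUT_PER_M = 1.74   # USD per 1M input tokens
OUTPUT_PER_M = 3.48  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one chat completion."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 10,000-token prompt with a 2,000-token reply
print(f"${request_cost(10_000, 2_000):.5f}")  # → $0.02436
```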

Context Window: 1M tokens
Max Output: 384K tokens
Input Cost: $1.74 per million tokens
Output Cost: $3.48 per million tokens
Release Date: Apr 24, 2026
Output Speed: 36 tokens/sec
Latency: 2.08s (time to first token)
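The throughput and latency figures above imply a rough end-to-end response time: time to first token, plus output length divided by output speed. A back-of-the-envelope sketch (real-world times vary with load and prompt size):

```python
LATENCY_S = 2.08     # time to first token, seconds
TOKENS_PER_SEC = 36  # sustained output speed

def response_time(output_tokens: int) -> float:
    """Rough wall-clock estimate for a completion of a given length."""
    return LATENCY_S + output_tokens / TOKENS_PER_SEC

# Example: a 1,000-token reply
print(f"{response_time(1_000):.1f}s")  # → 29.9s
```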

Model Playground

Try DeepSeek V4 Pro instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.


Benchmarks

How DeepSeek V4 Pro performs on standard evaluations.

Artificial Analysis Intelligence Index: 51.5 (better than 97% of tracked models)
Artificial Analysis Coding Index: 47.5 (better than 95% of tracked models)
Benchmark | Score
GPQA Diamond (graduate-level science Q&A) | 88.8%
Humanity's Last Exam (cross-domain reasoning) | 35.9%
SciCode (scientific programming) | 50.0%
IFBench (instruction following) | 76.5%
LCR (long-context reasoning) | 66.3%
Terminal-Bench Hard (agentic terminal tasks) | 46.2%
τ²-Bench (tool use / agents) | 96.2%

Scores sourced from Artificial Analysis.

Frequently Asked Questions

How do I use DeepSeek V4 Pro?

You can access DeepSeek V4 Pro by DeepSeek through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it from Python or cURL via Puter's OpenAI-compatible API.

Is DeepSeek V4 Pro free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add DeepSeek V4 Pro to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for DeepSeek V4 Pro?
DeepSeek V4 Pro costs $1.74 per 1M input tokens and $3.48 per 1M output tokens.
Price per 1M tokens:
Input: $1.74
Output: $3.48
Who created DeepSeek V4 Pro?

DeepSeek V4 Pro was created by DeepSeek and released on Apr 24, 2026.

What is the context window of DeepSeek V4 Pro?

DeepSeek V4 Pro supports a context window of 1M tokens. For reference, that is roughly equivalent to 2,097 pages of text.
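That page count follows from common rule-of-thumb conversions, roughly 0.75 English words per token and about 375 words per page, with "1M" read as 2^20 tokens. These conversion factors are approximations, not part of the model spec:

```python
CONTEXT_TOKENS = 2**20   # "1M" context window, read as 1,048,576 tokens
WORDS_PER_TOKEN = 0.75   # rough average for English text
WORDS_PER_PAGE = 375     # rough single-spaced page

pages = CONTEXT_TOKENS * WORDS_PER_TOKEN / WORDS_PER_PAGE
print(round(pages))  # → 2097
```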

What is the max output length of DeepSeek V4 Pro?

DeepSeek V4 Pro can generate up to 384K tokens in a single response.

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the DeepSeek V4 Pro API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add DeepSeek V4 Pro to your app without worrying about API keys or setup.

Read the Docs View Tutorials