
DeepSeek: DeepSeek V4 Flash

deepseek/deepseek-v4-flash

Access DeepSeek V4 Flash from DeepSeek using the Puter.js AI API.

Get Started
JavaScript (via npm, for bundled web apps):

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "deepseek/deepseek-v4-flash"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
HTML (browser, no build step):

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "deepseek/deepseek-v4-flash"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
Python (OpenAI-compatible API):

# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-flash",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
cURL:

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "deepseek/deepseek-v4-flash",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'

Model Card

DeepSeek V4 Flash is a lightweight, efficiency-focused Mixture-of-Experts model from DeepSeek, with 284B total parameters and 13B activated per token. It supports a 1M-token context window and configurable reasoning modes (standard, high, and max thinking effort).

Designed as the fast and economical option in the V4 family, Flash delivers reasoning capabilities that closely approach the larger V4 Pro, and performs on par with it on simpler agentic tasks. In its max reasoning mode, it achieves comparable reasoning scores to Pro when given a larger thinking budget.

At $0.14/M input and $0.28/M output tokens, it's one of the cheapest frontier-tier models available — well suited for high-throughput workloads like coding assistants, chat systems, and agent pipelines where latency and cost matter most.

Context Window: 1M tokens
Max Output: 384K tokens
Input Cost: $0.14 per million tokens
Output Cost: $0.28 per million tokens
Release Date: Apr 24, 2026
Output Speed: 79 tokens/sec
Latency: 0.87s (time to first token)

Model Playground

Try DeepSeek V4 Flash instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.


Benchmarks

How DeepSeek V4 Flash performs on standard evaluations.

Artificial Analysis Intelligence Index: 46.5 (better than 93% of tracked models)
Artificial Analysis Coding Index: 38.7 (better than 87% of tracked models)

Benchmark                                        Score
GPQA Diamond (graduate-level science Q&A)        89.4%
Humanity's Last Exam (cross-domain reasoning)    32.1%
SciCode (scientific programming)                 44.9%
IFBench (instruction following)                  79.2%
LCR (long-context reasoning)                     63.0%
Terminal-Bench Hard (agentic terminal tasks)     35.6%
τ²-Bench (tool use / agents)                     95.0%

Scores sourced from Artificial Analysis.

Frequently Asked Questions

How do I use DeepSeek V4 Flash?

You can access DeepSeek V4 Flash by DeepSeek through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is DeepSeek V4 Flash free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add DeepSeek V4 Flash to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for DeepSeek V4 Flash?

DeepSeek V4 Flash costs $0.14 per 1M input tokens and $0.28 per 1M output tokens.

Price per 1M tokens:
Input: $0.14
Output: $0.28
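As a quick sanity check, the cost of a single request at these rates can be estimated with a few lines of Python (the token counts in the example are illustrative, not from the source):

```python
# Published rates for deepseek/deepseek-v4-flash, in USD per 1M tokens.
INPUT_RATE = 0.14
OUTPUT_RATE = 0.28


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000


# Example: a 50K-token prompt producing a 2K-token reply.
cost = estimate_cost(50_000, 2_000)
print(f"${cost:.4f}")  # $0.0076
```

Note that billing is per token actually processed, so real costs depend on how your prompts tokenize.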
Who created DeepSeek V4 Flash?

DeepSeek V4 Flash was created by DeepSeek and released on Apr 24, 2026.

What is the context window of DeepSeek V4 Flash?

DeepSeek V4 Flash supports a context window of 1M tokens. For reference, that is roughly equivalent to 2,097 pages of text.
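The page estimate above can be reproduced with simple arithmetic, assuming "1M" means 2^20 tokens and roughly 500 tokens per printed page (both figures are assumptions; real documents vary widely):

```python
# Assumption: "1M" context = 2**20 tokens; ~500 tokens per printed page.
context_tokens = 2 ** 20      # 1,048,576 tokens
tokens_per_page = 500         # rough average; varies by content and tokenizer

pages = context_tokens // tokens_per_page
print(pages)  # 2097
```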

What is the max output length of DeepSeek V4 Flash?

DeepSeek V4 Flash can generate up to 384K tokens in a single response.

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the DeepSeek V4 Flash API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add DeepSeek V4 Flash to your app without worrying about API keys or setup.

Read the Docs View Tutorials