Qwen: Qwen3.6 Flash
qwen/qwen3.6-flash
Access Qwen3.6 Flash from Qwen using the Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
  model: "qwen/qwen3.6-flash"
}).then(response => {
  document.body.innerHTML = response.message.content;
});
<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    puter.ai.chat("Explain quantum computing in simple terms", {
      model: "qwen/qwen3.6-flash"
    }).then(response => {
      document.body.innerHTML = response.message.content;
    });
  </script>
</body>
</html>
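For chat-style UIs where perceived latency matters, the response can also be streamed as it is generated. A minimal sketch, assuming the `stream: true` option and per-chunk `text` field described in the Puter.js docs; the exact shape of streamed parts may vary:

const response = await puter.ai.chat("Explain quantum computing in simple terms", {
  model: "qwen/qwen3.6-flash",
  stream: true  // assumption: streaming option as documented for Puter.js
});
for await (const part of response) {
  // Append each chunk of reply text as it arrives.
  document.body.innerHTML += part?.text ?? "";
}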
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="qwen/qwen3.6-flash",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "qwen/qwen3.6-flash",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
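Because the endpoint is OpenAI-compatible, any HTTP client can call it. The sketch below replays the cURL request above with fetch in JavaScript; only the endpoint, headers, and body shown in that request are assumed:

const res = await fetch("https://api.puter.com/puterai/openai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_PUTER_AUTH_TOKEN"  // replace with your token
  },
  body: JSON.stringify({
    model: "qwen/qwen3.6-flash",
    messages: [
      { role: "user", content: "Explain quantum computing in simple terms" }
    ]
  })
});

// Standard OpenAI-style response shape: choices[0].message.content
const data = await res.json();
console.log(data.choices[0].message.content);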
Model Card
Qwen3.6 Flash is the speed-optimized tier of Alibaba's Qwen3.6 model family, designed for high-throughput, low-latency inference pipelines.
It sits alongside Qwen3.6 Max Preview, Plus, and 35B-A3B in the product lineup, targeting use cases where fast response times matter more than peak benchmark scores. Like other Qwen3.6 models, it builds on a hybrid architecture combining linear attention with sparse mixture-of-experts routing.
It is best suited for high-volume production workloads such as classification, extraction, summarization, and lightweight agent tasks where latency and cost efficiency are the primary constraints.
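As an illustration of those lightweight workloads, here is a hedged sketch of a sentiment-classification call via Puter.js. It assumes `puter.ai.chat` accepts an array of role-tagged messages, as in the OpenAI-style examples above; the label set and sample input are invented for illustration:

// Sketch of a high-volume classification task (labels are illustrative).
const review = "The checkout flow kept timing out and support never replied.";
puter.ai.chat([
  { role: "system", content: "Classify the user's message as positive, negative, or neutral. Reply with the label only." },
  { role: "user", content: review }
], { model: "qwen/qwen3.6-flash" }).then(response => {
  console.log(response.message.content); // expected output: "negative"
});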
| Spec | Value |
|---|---|
| Context Window | 1M tokens |
| Max Output | 66K tokens |
| Input Cost | $0.25 per million tokens |
| Output Cost | $1.50 per million tokens |
| Release Date | Apr 27, 2026 |
Model Playground
Try Qwen3.6 Flash instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
More AI Models From Qwen
Qwen3.6 27B
Qwen3.6 27B is a dense 27-billion-parameter multimodal model from Alibaba's Qwen team, purpose-built for agentic coding and repository-level reasoning. It scores 77.2% on SWE-bench Verified and 59.3% on Terminal-Bench 2.0, outperforming the previous-generation Qwen3.5-397B-A17B across all major coding benchmarks despite being far smaller. It natively supports text, image, and video inputs with a 262K-token context window, extendable to 1M tokens. A standout feature is Thinking Preservation, which retains reasoning traces across conversation turns — reducing redundant computation in multi-step agent loops. The model uses a hybrid attention architecture combining Gated DeltaNet with traditional self-attention. Ideal for developers building coding agents, multi-turn tool-use workflows, or frontend generation pipelines.
Qwen3.6 Max Preview
Qwen3.6 Max Preview is Alibaba's most capable language model to date — a proprietary flagship that claimed the top score on six major coding benchmarks at its April 20, 2026 release. It leads on SWE-bench Pro, Terminal-Bench 2.0, SkillsBench, QwenClawBench, QwenWebBench, and SciCode. The Artificial Analysis Intelligence Index rates it at 52, well above the median for reasoning models in its price tier. It supports a 256K-token context window and is text-only at launch. As a preview release, Alibaba is still actively iterating on the model. Best suited for teams building coding agents, scientific computing tools, or frontend generation systems that need peak benchmark performance.
Qwen3.6 35B A3B
Qwen3.6 35B A3B is a sparse Mixture-of-Experts model with 35 billion total parameters but only 3 billion active per token, making it highly efficient for inference. Developed by Alibaba's Qwen team, it scores 73.4% on SWE-bench Verified and 51.5% on Terminal-Bench 2.0 — significantly outperforming dense models like Gemma 4-31B (52.0% on SWE-bench Verified). It natively handles text, image, and video with a 262K-token context window, extendable to 1M tokens. The model supports Thinking Preservation for stable multi-turn reasoning and includes native tool-calling capabilities. Released under Apache 2.0, it was the first open-weight model in the Qwen3.6 family. A strong choice for developers who want frontier-adjacent coding performance at a fraction of the compute cost of larger models.
Frequently Asked Questions
How do I access Qwen3.6 Flash?
You can access Qwen3.6 Flash by Qwen through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is Qwen3.6 Flash free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Qwen3.6 Flash to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.

How much does Qwen3.6 Flash cost?

| | Price per 1M tokens |
|---|---|
| Input | $0.25 |
| Output | $1.50 |

Who created Qwen3.6 Flash, and when was it released?
Qwen3.6 Flash was created by Qwen and released on Apr 27, 2026.

What is the context window of Qwen3.6 Flash?
Qwen3.6 Flash supports a context window of 1M tokens. For reference, that is roughly equivalent to 2,000 pages of text.

How long can a single response be?
Qwen3.6 Flash can generate up to 66K tokens in a single response.

Does Qwen3.6 Flash work with JavaScript frameworks?
Yes: the Qwen3.6 Flash API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Qwen3.6 Flash to your app without worrying about API keys or setup.
Read the Docs View Tutorials