// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "qwen/qvq-max"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "qwen/qvq-max"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="qwen/qvq-max",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "qwen/qvq-max",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
Model Card
QVQ Max is Alibaba's flagship visual reasoning model, built by the Qwen team to combine deep multimodal understanding with rigorous logical inference.
Unlike standard vision-language models, QVQ Max is designed to think through what it sees — analyzing charts, diagrams, math problems, and everyday images step by step before responding. It scores 70.3% on MMMU and 71.4% on MathVista (mini), placing it among the top multimodal reasoning models available via API. The model handles text and image inputs across a 131K token context window and supports tool calling for agentic workflows.
Ideal for developers building tutoring tools, visual data analysis pipelines, document understanding systems, or any application that requires both image comprehension and structured reasoning.
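Since visual reasoning is the model's core strength, most real calls will pass an image alongside the prompt. Below is a minimal sketch assuming the Puter.js chat overload that takes (prompt, imageURL, testMode, options); the chart URL is a hypothetical placeholder for your own image.

// Minimal image-input sketch; assumes Puter.js is loaded as shown above.
// The image URL is a placeholder: swap in your own chart, diagram, or photo.
puter.ai.chat(
    "Walk through this chart step by step and explain the overall trend.",
    "https://example.com/sales-chart.png", // image to analyze (placeholder)
    false,                                 // testMode off
    { model: "qwen/qvq-max" }
).then(response => {
    document.body.innerHTML = response.message.content;
});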
| Spec | Value |
|---|---|
| Context Window | 131K tokens |
| Max Output | 8K tokens |
| Input Cost | $1.20 per 1M tokens |
| Output Cost | $4.80 per 1M tokens |
| Input Modalities | text, image |
| Tool Use | Yes |
| Knowledge Cutoff | Apr 2024 |
| Release Date | Mar 25, 2025 |
Model Playground
Try QVQ Max instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
More AI Models From Qwen
Qwen3.6 Flash
Qwen3.6 Flash is the speed-optimized tier of Alibaba's Qwen3.6 model family, designed for high-throughput, low-latency inference pipelines. It sits alongside Qwen3.6 Max Preview, Plus, and 35B-A3B in the product lineup, targeting use cases where fast response times matter more than peak benchmark scores. Like other Qwen3.6 models, it builds on a hybrid architecture combining linear attention with sparse mixture-of-experts routing. It is best suited for high-volume production workloads such as classification, extraction, summarization, and lightweight agent tasks where latency and cost efficiency are the primary constraints.
Qwen3.5 Plus 2026-04-20
Qwen3.5 Plus is a proprietary hosted model from Alibaba, built on the Qwen3.5-397B-A17B Mixture-of-Experts architecture with 397 billion total parameters and 17 billion active per token. Its headline feature is a 1-million-token native context window — among the largest available via API — making it well suited for processing entire codebases, long documents, or extended multi-turn conversations in a single request. It supports both a deep-thinking mode and an "Auto" mode that adaptively invokes tools like web search and code interpreters. This April 20, 2026 snapshot reflects ongoing improvements to the model since its original February 2026 launch. The Qwen3.5 series demonstrated strong multimodal performance across reasoning, coding, and vision tasks. A solid general-purpose option for developers needing large-context capabilities without migrating to the newer Qwen3.6 line.
Qwen3.6 27B
Qwen3.6 27B is a dense 27-billion-parameter multimodal model from Alibaba's Qwen team, purpose-built for agentic coding and repository-level reasoning. It scores 77.2% on SWE-bench Verified and 59.3% on Terminal-Bench 2.0, outperforming the previous-generation Qwen3.5-397B-A17B across all major coding benchmarks despite being far smaller. It natively supports text, image, and video inputs with a 262K-token context window, extendable to 1M tokens. A standout feature is Thinking Preservation, which retains reasoning traces across conversation turns — reducing redundant computation in multi-step agent loops. The model uses a hybrid attention architecture combining Gated DeltaNet with traditional self-attention. Ideal for developers building coding agents, multi-turn tool-use workflows, or frontend generation pipelines.
Frequently Asked Questions
How do I access QVQ Max?
You can access QVQ Max by Qwen through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is QVQ Max free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add QVQ Max to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does QVQ Max cost?
| | Price per 1M tokens |
|---|---|
| Input | $1.20 |
| Output | $4.80 |
Who created QVQ Max and when was it released?
QVQ Max was created by Qwen and released on Mar 25, 2025.
What is the context window of QVQ Max?
QVQ Max supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text.
What is the maximum output length of QVQ Max?
QVQ Max can generate up to 8K tokens in a single response.
What is the knowledge cutoff date for QVQ Max?
QVQ Max has a knowledge cutoff date of Apr 2024. This means the model was trained on data available up to that date.
What input and output modalities does QVQ Max support?
QVQ Max accepts the following input types: text, image. It produces: text.
Does QVQ Max support tool use (function calling)?
Yes, QVQ Max supports tool use (function calling), allowing it to interact with external tools, APIs, and data sources as part of its response flow.
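As a quick illustration, here is a minimal function-calling sketch, assuming puter.ai.chat accepts an OpenAI-style tools array in its options and surfaces any tool invocation under message.tool_calls. The get_weather function and its schema are hypothetical stand-ins for your own tools.

// Hypothetical tool definition; replace with your own function schema.
const tools = [{
    type: "function",
    function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
            type: "object",
            properties: {
                city: { type: "string", description: "City name" }
            },
            required: ["city"]
        }
    }
}];

puter.ai.chat("What's the weather in Paris?", {
    model: "qwen/qvq-max",
    tools: tools
}).then(response => {
    const call = response.message.tool_calls?.[0];
    if (call) {
        // The model requested a tool call; run it, then send the result back.
        console.log(call.function.name, call.function.arguments);
    } else {
        console.log(response.message.content);
    }
});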
Can I use QVQ Max with JavaScript frameworks?
Yes — the QVQ Max API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add QVQ Max to your app without worrying about API keys or setup.
Read the Docs · View Tutorials