
Qwen: Qwen-Turbo

qwen/qwen-turbo

Access Qwen-Turbo from Qwen using the Puter.js AI API.

Get Started

JavaScript (npm, for bundled web apps):

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "qwen/qwen-turbo"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
Browser (HTML):

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "qwen/qwen-turbo"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
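
For longer answers, you can stream tokens as they arrive instead of waiting for the complete response. A minimal sketch, assuming Puter.js's `stream: true` option, which returns an async iterable whose parts expose a `text` field:

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        (async () => {
            // Ask for a streamed response rather than a single message object.
            const response = await puter.ai.chat(
                "Explain quantum computing in simple terms",
                { model: "qwen/qwen-turbo", stream: true }
            );
            // Append each incremental chunk to the page as it arrives.
            for await (const part of response) {
                document.body.append(part?.text ?? "");
            }
        })();
    </script>
</body>
</html>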
Python (OpenAI-compatible API):

# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="qwen/qwen-turbo",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
cURL:

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "qwen/qwen-turbo",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
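
The same OpenAI-compatible endpoint also works from Node.js with the official openai package. This sketch simply mirrors the Python example above:

// npm install openai
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api.puter.com/puterai/openai/v1/",
    apiKey: "YOUR_PUTER_AUTH_TOKEN",
});

const response = await client.chat.completions.create({
    model: "qwen/qwen-turbo",
    messages: [
        { role: "user", content: "Explain quantum computing in simple terms" },
    ],
});

console.log(response.choices[0].message.content);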

Model Card

Qwen Turbo is a fast, cost-effective model, ideal for simple tasks that require quick responses. It supports multiple languages and offers flexible tiered pricing. The model family advertises context lengths of up to 1M tokens; the deployment listed here exposes a 131K-token window.

Context Window: 131K tokens
Max Output: 8K tokens
Input Cost: $0.03 per million tokens
Output Cost: $0.13 per million tokens
Release Date: Jan 27, 2025
Output Speed: 69 tokens/sec
Latency: 1.26s (time to first token)

Model Playground

Try Qwen-Turbo instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.


Benchmarks

How Qwen-Turbo performs on standard evaluations.

Artificial Analysis Intelligence Index: 12.0 (better than 23% of tracked models)

GPQA Diamond (graduate-level science Q&A): 41.0%
Humanity's Last Exam (cross-domain reasoning): 4.2%
LiveCodeBench (recent coding problems): 16.3%
SciCode (scientific programming): 15.3%
MATH-500 (competition math): 80.5%
AIME 2024 (advanced math exam): 12.0%

Scores sourced from Artificial Analysis.

Find other Qwen models


Qwen3.6 Plus

Qwen 3.6 Plus is Alibaba's flagship large language model, built on a hybrid architecture combining linear attention with sparse mixture-of-experts routing for high throughput and scalability. It's optimized for agentic coding and complex multi-step workflows. On Terminal-Bench 2.0, it scores 61.6, surpassing Claude 4.5 Opus (59.3), while its 78.8 on SWE-bench Verified places it close behind. It also leads on MCPMark (48.2%) for tool-calling reliability. A native multimodal model, it handles text, images, and documents within a 1M-token context window with up to 65K output tokens. Notable features include always-on chain-of-thought reasoning, native function calling, and a preserve_thinking parameter that retains reasoning across multi-turn agent loops. A strong fit for developers building AI coding agents, terminal automation, and tool-using pipelines.


Qwen3.5-9B

Qwen 3.5 9B is a 9-billion parameter open-source multimodal model by Alibaba's Qwen Team, featuring a 262K native context window (extendable to ~1M tokens), support for text, image, and video input, and coverage of 201 languages. It uses a hybrid Gated DeltaNet architecture and outperforms much larger models like Qwen3-30B and OpenAI's gpt-oss-120B on key benchmarks including reasoning, vision, and document understanding.


Qwen3.5-122B-A10B

Qwen 3.5 122B (10B Active) is Alibaba's largest medium-sized MoE model, activating only 10B of its 122B total parameters per inference pass. It excels at agentic tasks like tool use and multi-step reasoning, leading the Qwen 3.5 lineup on benchmarks such as BFCL-V4 and BrowseComp. It supports 262K native context (extendable to 1M), native multimodal input, and 201 languages under Apache 2.0.

Frequently Asked Questions

How do I use Qwen-Turbo?

You can access Qwen-Turbo by Qwen through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript; no backend and no configuration are required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is Qwen-Turbo free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Qwen-Turbo to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Qwen-Turbo?

Pricing for Qwen-Turbo is based on the number of input and output tokens used per request.

Price per 1M tokens:
Input: $0.03
Output: $0.13

Who created Qwen-Turbo?

Qwen-Turbo was created by Qwen and released on Jan 27, 2025.

What is the context window of Qwen-Turbo?

Qwen-Turbo supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text.
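
A large context window matters mostly when you pass long documents in the prompt. A hedged sketch reusing the OpenAI-compatible client from the Node.js example above; `longDocument` is a placeholder for your own text, and the prompt plus the reply must fit within the 131K-token window:

// `client` is the OpenAI-compatible client from the Node.js example above.
const longDocument = "..."; // placeholder: your own text

const response = await client.chat.completions.create({
    model: "qwen/qwen-turbo",
    messages: [
        {
            role: "user",
            content: `Summarize the following document:\n\n${longDocument}`,
        },
    ],
});

console.log(response.choices[0].message.content);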

What is the max output length of Qwen-Turbo?

Qwen-Turbo can generate up to 8K tokens in a single response.
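
If you want replies shorter than that ceiling, here is a minimal sketch, assuming the OpenAI-compatible endpoint honors the standard `max_tokens` field of the Chat Completions API:

// `client` is the OpenAI-compatible client from the Node.js example above.
const response = await client.chat.completions.create({
    model: "qwen/qwen-turbo",
    max_tokens: 256, // cap the reply well below the 8K maximum
    messages: [
        { role: "user", content: "Explain quantum computing in simple terms" },
    ],
});

console.log(response.choices[0].message.content);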

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Qwen-Turbo API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
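
As an illustration, a hedged React sketch (the component and handler names are hypothetical); it assumes Puter.js is loaded via the script tag, which exposes a global `puter` object:

import { useState } from "react";

// Hypothetical component: asks Qwen-Turbo a fixed question on click.
export default function AskQwen() {
    const [answer, setAnswer] = useState("");

    async function ask() {
        // `window.puter` is the global exposed by <script src="https://js.puter.com/v2/">.
        const response = await window.puter.ai.chat(
            "Explain quantum computing in simple terms",
            { model: "qwen/qwen-turbo" }
        );
        setAnswer(response.message.content);
    }

    return (
        <div>
            <button onClick={ask}>Ask Qwen-Turbo</button>
            <p>{answer}</p>
        </div>
    );
}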

Get started with Puter.js

Add Qwen-Turbo to your app without worrying about API keys or setup.

Read the Docs · View Tutorials