
Qwen: Qwen2.5-Omni 7B

Model ID: qwen/qwen2-5-omni-7b

Access Qwen2.5-Omni 7B from Qwen using the Puter.js AI API.

Get Started
JavaScript (via npm):

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "qwen/qwen2-5-omni-7b"
}).then(response => {
    // Renders the reply in a browser bundle; use console.log in Node.js
    document.body.innerHTML = response.message.content;
});
HTML (script tag):

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "qwen/qwen2-5-omni-7b"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
Python (OpenAI-compatible API):

# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="qwen/qwen2-5-omni-7b",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
cURL:

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "qwen/qwen2-5-omni-7b",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'

Model Card

Qwen2.5-Omni 7B is Alibaba's end-to-end omni-modal model capable of perceiving text, images, audio, and video simultaneously while generating text and natural speech in real time.
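For example, you can pass an image alongside a text prompt. A minimal sketch, assuming Puter.js's documented prompt-plus-image-URL call signature; the image URL is a placeholder:

puter.ai.chat(
    "Describe what you see in this photo.",
    "https://example.com/photo.jpg", // placeholder image URL
    false,                           // testMode off
    { model: "qwen/qwen2-5-omni-7b" }
).then(response => {
    console.log(response.message.content);
});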

Built on a Thinker-Talker architecture with TMRoPE (Time-aligned Multimodal RoPE) for synchronizing audio and video streams, the 7B model achieves strong benchmark results across all modalities. It ranked first on the MMAU audio understanding leaderboard, scored 59.2 on MMMU image reasoning (near GPT-4o-mini's 60.0), and achieved 64.3 on Video-MME for video understanding without subtitles. On OmniBench, which tests cross-modal integration, it reached 56.13%.

The model supports tool/function calling and targets developers building voice assistants, video analysis tools, and multimodal pipelines that require a single model to handle diverse input types.
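To illustrate function calling, here is a minimal sketch using OpenAI-style tool definitions passed through the Puter.js tools option; get_weather is a hypothetical tool defined for this example, not a built-in:

const tools = [{
    type: "function",
    function: {
        name: "get_weather", // hypothetical tool for illustration
        description: "Get the current weather for a city",
        parameters: {
            type: "object",
            properties: {
                city: { type: "string", description: "City name" }
            },
            required: ["city"]
        }
    }
}];

puter.ai.chat("What's the weather in Paris?", {
    model: "qwen/qwen2-5-omni-7b",
    tools
}).then(response => {
    // If the model decided to call a tool, inspect the requested call
    const call = response.message.tool_calls?.[0];
    if (call) {
        console.log(call.function.name, call.function.arguments);
    } else {
        console.log(response.message.content);
    }
});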

Context Window: 33K tokens
Max Output: 2K tokens
Input Cost: $0.10 per million tokens
Output Cost: $0.40 per million tokens
Input Modalities: text, image, audio, video
Tool Use: Yes
Knowledge Cutoff: Apr 2024
Release Date: Dec 2024

Model Playground

Try Qwen2.5-Omni 7B instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.


More AI Models From Qwen

Find other Qwen models


Qwen3.6 Flash

Qwen3.6 Flash is the speed-optimized tier of Alibaba's Qwen3.6 model family, designed for high-throughput, low-latency inference pipelines. It sits alongside Qwen3.6 Max Preview, Plus, and 35B-A3B in the product lineup, targeting use cases where fast response times matter more than peak benchmark scores. Like other Qwen3.6 models, it builds on a hybrid architecture combining linear attention with sparse mixture-of-experts routing. It is best suited for high-volume production workloads such as classification, extraction, summarization, and lightweight agent tasks where latency and cost efficiency are the primary constraints.


Qwen3.5 Plus 2026-04-20

Qwen3.5 Plus is a proprietary hosted model from Alibaba, built on the Qwen3.5-397B-A17B Mixture-of-Experts architecture with 397 billion total parameters and 17 billion active per token. Its headline feature is a 1-million-token native context window — among the largest available via API — making it well suited for processing entire codebases, long documents, or extended multi-turn conversations in a single request. It supports both a deep-thinking mode and an "Auto" mode that adaptively invokes tools like web search and code interpreters. This April 20, 2026 snapshot reflects ongoing improvements to the model since its original February 2026 launch. The Qwen3.5 series demonstrated strong multimodal performance across reasoning, coding, and vision tasks. A solid general-purpose option for developers needing large-context capabilities without migrating to the newer Qwen3.6 line.


Qwen3.6 27B

Qwen3.6 27B is a dense 27-billion-parameter multimodal model from Alibaba's Qwen team, purpose-built for agentic coding and repository-level reasoning. It scores 77.2% on SWE-bench Verified and 59.3% on Terminal-Bench 2.0, outperforming the previous-generation Qwen3.5-397B-A17B across all major coding benchmarks despite being far smaller. It natively supports text, image, and video inputs with a 262K-token context window, extendable to 1M tokens. A standout feature is Thinking Preservation, which retains reasoning traces across conversation turns — reducing redundant computation in multi-step agent loops. The model uses a hybrid attention architecture combining Gated DeltaNet with traditional self-attention. Ideal for developers building coding agents, multi-turn tool-use workflows, or frontend generation pipelines.

Frequently Asked Questions

How do I use Qwen2.5-Omni 7B?

You can access Qwen2.5-Omni 7B by Qwen through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is Qwen2.5-Omni 7B free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Qwen2.5-Omni 7B to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Qwen2.5-Omni 7B?

Qwen2.5-Omni 7B costs $0.10 per 1M input tokens and $0.40 per 1M output tokens.
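As a quick sanity check on those rates, here is the cost arithmetic for a hypothetical request (the token counts below are made up for illustration):

// Hypothetical usage: 20,000 input tokens, 1,500 output tokens
const inputTokens = 20000;
const outputTokens = 1500;
const costUSD = (inputTokens / 1e6) * 0.10 + (outputTokens / 1e6) * 0.40;
console.log(costUSD.toFixed(4)); // "0.0026" — about a quarter of a cent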
Who created Qwen2.5-Omni 7B?

Qwen2.5-Omni 7B was created by Qwen and released in December 2024.

What is the context window of Qwen2.5-Omni 7B?

Qwen2.5-Omni 7B supports a context window of 33K tokens. For reference, that is roughly equivalent to 66 pages of text.

What is the max output length of Qwen2.5-Omni 7B?

Qwen2.5-Omni 7B can generate up to 2K tokens in a single response.

What is the knowledge cutoff of Qwen2.5-Omni 7B?

Qwen2.5-Omni 7B has a knowledge cutoff date of Apr 2024. This means the model was trained on data available up to that date.

What types of input can Qwen2.5-Omni 7B process?

Qwen2.5-Omni 7B accepts the following input types: text, image, audio, video. It produces: text.

Does Qwen2.5-Omni 7B support tool use (function calling)?

Yes, Qwen2.5-Omni 7B supports tool use (function calling), allowing it to interact with external tools, APIs, and data sources as part of its response flow.

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Qwen2.5-Omni 7B API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add Qwen2.5-Omni 7B to your app without worrying about API keys or setup.

Read the Docs View Tutorials