Baidu: Qianfan CoBuddy
baidu/cobuddy:free
Access Qianfan CoBuddy from Baidu using Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "baidu/cobuddy:free"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "baidu/cobuddy:free"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="baidu/cobuddy:free",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "baidu/cobuddy:free",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
Model Card
CoBuddy is a code generation model from Baidu, released through the Qianfan platform and optimized for coding tasks and AI agent workflows.
The model offers native support for both tool calling and reasoning, making it a strong fit for agentic use cases where the model needs to plan, invoke tools, and iterate. It provides a 131K token context window with up to 66K output tokens, giving it ample room for large codebases and extended generation.
CoBuddy is engineered for high inference throughput and low end-to-end latency. It's a solid choice for developers building code-centric agents or assistive coding tools who need responsive performance alongside structured tool use.
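Because Puter exposes an OpenAI-compatible endpoint, tool calling presumably follows the standard OpenAI function-calling request shape. Below is a minimal sketch of what such a request body could look like when sent to the chat completions endpoint shown above; the get_file_contents tool and its schema are illustrative assumptions, not part of the CoBuddy API:

```python
import json

# Hypothetical sketch of an OpenAI-style tool-calling request body for
# Puter's OpenAI-compatible endpoint. The tool name and schema below are
# made up for illustration.
payload = {
    "model": "baidu/cobuddy:free",
    "messages": [
        {"role": "user", "content": "Read src/main.py and summarize it."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_file_contents",
                "description": "Return the contents of a file in the workspace.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {"type": "string", "description": "File path"}
                    },
                    "required": ["path"],
                },
            },
        }
    ],
}

# Serialize exactly as it would be POSTed to
# https://api.puter.com/puterai/openai/v1/chat/completions
body = json.dumps(payload)
```

If the model decides to call the tool, the response would carry a tool_calls entry instead of plain text, which your agent loop executes and feeds back as a "tool" role message.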
Context Window 131K
tokens
Max Output 66K
tokens
Input Cost $0
per million tokens
Output Cost $0
per million tokens
Release Date May 6, 2026
Model Playground
Try Qianfan CoBuddy instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
More AI Models From Baidu
Qianfan OCR Fast
Qianfan OCR Fast is a document intelligence model from Baidu's Qianfan team, purpose-built for optical character recognition tasks. It is an upgraded variant of the base Qianfan-OCR, trained on specialized OCR data while retaining general multimodal capabilities. The underlying Qianfan-OCR architecture is a 4B-parameter end-to-end vision-language model that replaces traditional multi-stage OCR pipelines with a single model handling document parsing, layout analysis, table extraction, chart understanding, key information extraction, and document QA. It performs direct image-to-Markdown conversion and supports 192 languages. The base model scored 93.12 on OmniDocBench v1.5 and 79.8 on OlmOCR Bench, leading all end-to-end models on both. Qianfan OCR Fast offers a 65K-token context window and is well suited for developers building document processing pipelines — invoice parsing, report extraction, exam grading, or RAG over scanned documents.
ERNIE 4.5 21B A3B
ERNIE 4.5 21B A3B is a lightweight text-only language model from Baidu using a Mixture-of-Experts architecture with 21B total parameters but only 3B active per token. It excels at general language understanding, generation, reasoning, and coding tasks while remaining computationally efficient. Released under Apache 2.0, it achieves competitive performance against larger models like Qwen3-30B-A3B despite having 30% fewer total parameters.
ERNIE 4.5 21B A3B Thinking
ERNIE 4.5 21B A3B Thinking is Baidu's reasoning-enhanced language model built on the 21B A3B architecture with explicit chain-of-thought capabilities. It activates only 3B of its 21B parameters per token while specializing in logic, mathematics, coding, and multi-step reasoning tasks. The model supports extended context up to 131K tokens and is optimized for complex problem-solving through structured thinking.
Frequently Asked Questions
How do I access Qianfan CoBuddy?
You can access Qianfan CoBuddy by Baidu through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is Qianfan CoBuddy free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Qianfan CoBuddy to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Qianfan CoBuddy cost?
| Token type | Price per 1M tokens |
|---|---|
| Input | $0 |
| Output | $0 |
Who created Qianfan CoBuddy, and when was it released?
Qianfan CoBuddy was created by Baidu and released on May 6, 2026.
What is the context window of Qianfan CoBuddy?
Qianfan CoBuddy supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text.
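The 262-page figure follows from a simple tokens-per-page heuristic; the ~500 tokens per page used below is a common rule of thumb, not an official Puter number:

```python
# Rough arithmetic behind the "262 pages" estimate.
context_tokens = 131_000
tokens_per_page = 500  # assumption: typical tokens per printed page
pages = context_tokens // tokens_per_page
print(pages)  # 262
```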
What is the maximum output length of Qianfan CoBuddy?
Qianfan CoBuddy can generate up to 66K tokens in a single response.
Does Qianfan CoBuddy work with my framework?
Yes: the Qianfan CoBuddy API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Qianfan CoBuddy to your app without worrying about API keys or setup.
Read the Docs View Tutorials