Baidu

Baidu API

Access Baidu instantly with Puter.js, and add AI to any app in a few lines of code, with no backend and no API keys.

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain AI like I'm five!", {
    model: "baidu/ernie-4.5-300b-a47b"
}).then(response => {
    console.log(response);
});

<!-- Or load Puter.js directly in the browser with a script tag: -->
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain AI like I'm five!", {
            model: "baidu/ernie-4.5-300b-a47b"
        }).then(response => {
            console.log(response);
        });
    </script>
</body>
</html>
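Beyond the one-shot call above, Puter.js can also stream the reply as it is generated by passing `stream: true`; the response then becomes an async iterable of parts, each carrying a `text` field. A minimal sketch, assuming the script tag above has loaded the global `puter` object (the `joinParts` helper is our own addition for collecting chunks):

```javascript
// Streaming sketch. Assumes the global `puter` object from the
// Puter.js script tag; the function is defined but not invoked here.
async function streamExplanation() {
  const response = await puter.ai.chat("Explain AI like I'm five!", {
    model: "baidu/ernie-4.5-300b-a47b",
    stream: true // deliver the reply incrementally instead of all at once
  });
  const parts = [];
  for await (const part of response) {
    if (part?.text) {
      console.log(part.text); // print each chunk as it arrives
      parts.push(part);
    }
  }
  return joinParts(parts);
}

// Small pure helper: concatenate the text of streamed parts.
function joinParts(parts) {
  return parts.map(p => p?.text ?? "").join("");
}
```

Streaming is especially useful with the larger ERNIE models, where waiting for the full completion before rendering anything would make the app feel slow.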

List of Baidu Models

Chat

Qianfan CoBuddy

baidu/cobuddy:free

CoBuddy is a code generation model from Baidu, released through the Qianfan platform and optimized for coding tasks and AI agent workflows. The model offers native support for both tool calling and reasoning, making it a strong fit for agentic use cases where the model needs to plan, invoke tools, and iterate. It provides a 131K token context window with up to 65K output tokens, giving it ample room for large codebases and extended generation. CoBuddy is engineered for high inference throughput and low end-to-end latency. It's a solid choice for developers building code-centric agents or assistive coding tools who need responsive performance alongside structured tool use.
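Since CoBuddy natively supports tool calling, you can pass an OpenAI-style `tools` array in the chat options and let the model decide when to invoke a tool. A hedged sketch, assuming the global `puter` object is loaded; the `run_tests` tool below is a made-up example, not part of any real API:

```javascript
// Hypothetical tool definition in the OpenAI-style function format.
// The "run_tests" tool is an illustrative example only.
const tools = [{
  type: "function",
  function: {
    name: "run_tests",
    description: "Run the project's test suite and return the results",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "Directory to test" }
      },
      required: ["path"]
    }
  }
}];

// Defined but not invoked here; assumes the global `puter` object.
async function askCoBuddy(prompt) {
  return puter.ai.chat(prompt, {
    model: "baidu/cobuddy:free",
    tools // the model may respond with a tool call instead of plain text
  });
}
```

In an agent loop, you would inspect the response for tool calls, execute the requested tool, and feed the result back into the conversation.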

Chat

Qianfan OCR Fast

baidu/qianfan-ocr-fast:free

Qianfan OCR Fast is a document intelligence model from Baidu's Qianfan team, purpose-built for optical character recognition tasks. It is an upgraded variant of the base Qianfan-OCR, trained on specialized OCR data while retaining general multimodal capabilities. The underlying Qianfan-OCR architecture is a 4B-parameter end-to-end vision-language model that replaces traditional multi-stage OCR pipelines with a single model handling document parsing, layout analysis, table extraction, chart understanding, key information extraction, and document QA. It performs direct image-to-Markdown conversion and supports 192 languages. The base model scored 93.12 on OmniDocBench v1.5 and 79.8 on OlmOCR Bench, leading all end-to-end models on both. Qianfan OCR Fast offers a 65K-token context window and is well suited for developers building document processing pipelines — invoice parsing, report extraction, exam grading, or RAG over scanned documents.
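For image inputs, `puter.ai.chat` accepts an image URL as its second argument. A minimal sketch of invoice-to-Markdown extraction, assuming the global `puter` object is loaded; the image URL is a placeholder you would replace with your own document:

```javascript
// OCR sketch. Defined but not invoked here; assumes the global
// `puter` object from the Puter.js script tag.
async function extractInvoice(imageUrl) {
  return puter.ai.chat(
    "Convert this scanned invoice to Markdown.",
    imageUrl, // URL of the document image to parse
    { model: "baidu/qianfan-ocr-fast:free" }
  );
}
```

The same pattern covers the model's other document tasks: swap the prompt for table extraction, key information extraction, or document QA.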

Chat

ERNIE 4.5 21B A3B

baidu/ernie-4.5-21b-a3b

ERNIE 4.5 21B A3B is a lightweight text-only language model from Baidu using a Mixture-of-Experts architecture with 21B total parameters but only 3B active per token. It excels at general language understanding, generation, reasoning, and coding tasks while remaining computationally efficient. Released under Apache 2.0, it achieves competitive performance against larger models like Qwen3-30B-A3B despite having 30% fewer total parameters.

Chat

ERNIE 4.5 21B A3B Thinking

baidu/ernie-4.5-21b-a3b-thinking

ERNIE 4.5 21B A3B Thinking is Baidu's reasoning-enhanced language model built on the 21B A3B architecture with explicit chain-of-thought capabilities. It activates only 3B of its 21B parameters per token while specializing in logic, mathematics, coding, and multi-step reasoning tasks. The model supports extended context up to 131K tokens and is optimized for complex problem-solving through structured thinking.

Chat

ERNIE 4.5 300B A47B

baidu/ernie-4.5-300b-a47b

ERNIE 4.5 300B A47B is Baidu's flagship text-only large language model featuring 300B total parameters with 47B active per token via MoE architecture. It demonstrates state-of-the-art performance on instruction following and knowledge benchmarks like IFEval, SimpleQA, and ChineseSimpleQA. The model supports 131K context length and excels at text understanding, generation, reasoning, and coding.

Chat

ERNIE 4.5 VL 28B A3B

baidu/ernie-4.5-vl-28b-a3b

ERNIE 4.5 VL 28B A3B is a lightweight multimodal vision-language model with 28B total parameters but only 3B active per token. It processes both images and text simultaneously, enabling tasks like image comprehension, chart analysis, document understanding, and cross-modal reasoning. The model offers both thinking and non-thinking modes while matching performance of larger models like Qwen2.5-VL-32B.

Chat

ERNIE 4.5 VL 424B A47B

baidu/ernie-4.5-vl-424b-a47b

ERNIE 4.5 VL 424B A47B is Baidu's largest multimodal vision-language model with 424B total parameters and 47B active per token. It supports up to 131K context tokens and excels at visual reasoning, document/chart understanding, and visual question answering with both thinking and non-thinking modes. In thinking mode, it approaches or surpasses OpenAI o1 on reasoning benchmarks like MathVista, MMMU, and VisualPuzzle.
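Visual question answering with the VL models follows the same image-URL pattern. A hedged sketch, assuming the global `puter` object is loaded; the chart URL is a placeholder:

```javascript
// Visual QA sketch. Defined but not invoked here; assumes the global
// `puter` object from the Puter.js script tag.
async function askAboutChart(chartUrl) {
  return puter.ai.chat(
    "What trend does this chart show? Answer in one sentence.",
    chartUrl, // URL of the chart or document image
    { model: "baidu/ernie-4.5-vl-424b-a47b" }
  );
}
```

For lighter workloads, the same call works with `baidu/ernie-4.5-vl-28b-a3b`, trading some reasoning depth for lower cost and latency.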

Frequently Asked Questions

What is this Baidu API about?

The Baidu API gives you access to Baidu's chat, reasoning, and multimodal models. Through Puter.js, you can start using them instantly with zero setup or configuration.

Which Baidu models can I use?

Puter.js supports a variety of Baidu models, including Qianfan CoBuddy, Qianfan OCR Fast, ERNIE 4.5 21B A3B, and more. Find all AI models supported by Puter.js in the AI model list.

How much does it cost?

With the User-Pays model, users cover their own AI costs through their Puter account. This means you can build apps without worrying about infrastructure expenses.

What is Puter.js?

Puter.js is a JavaScript library that provides access to AI, storage, and other cloud services directly from a single API. It handles authentication, infrastructure, and scaling so you can focus on building your app.

Does this work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Baidu API through Puter.js works with any JavaScript framework, Node.js, or plain HTML. Just include the library and start building. See the documentation for more details.