```javascript
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "cohere/command-a"
}).then(response => {
    // Render the reply (in a browser context)
    document.body.innerHTML = response.message.content;
});
```
```html
<!DOCTYPE html>
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "cohere/command-a"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
```
```python
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="cohere/command-a",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)
print(response.choices[0].message.content)
```
```bash
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "cohere/command-a",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
```
Model Card
Command A is Cohere's flagship enterprise language model with 111 billion parameters and a 256K token context window, released in March 2025.
Built for complex agentic workflows, it leads on tool-use benchmarks including BFCL-v3 and Tau-bench, and performs on par with GPT-4o on MMLU and SQL tasks. It is particularly strong at multi-step tool calling — including knowing when not to invoke a tool, a critical quality for production agents.
It supports 23 languages and delivers 150% higher throughput than Command R+, making it a strong choice for developers building RAG pipelines, autonomous agents, or multilingual enterprise applications.
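If Puter's OpenAI-compatible endpoint accepts the standard Chat Completions tool-calling parameters (a reasonable assumption, but not confirmed here), a multi-step tool call can be set up as below. The `get_weather` tool and its schema are purely illustrative:

```python
import json

# Illustrative tool schema in the standard OpenAI Chat Completions format.
# The get_weather function is a made-up placeholder, not a real API.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

def build_tool_request(prompt: str) -> dict:
    """Assemble keyword arguments for client.chat.completions.create()."""
    return {
        "model": "cohere/command-a",
        "messages": [{"role": "user", "content": prompt}],
        "tools": TOOLS,
        "tool_choice": "auto",  # let the model decide whether to call a tool
    }

request = build_tool_request("What's the weather in Lisbon?")
print(json.dumps(request, indent=2))
```

After sending the request, check `response.choices[0].message.tool_calls`: when it is empty, the model chose to answer directly, which is the "knowing when not to invoke a tool" behavior described above.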
| Spec | Value |
|---|---|
| Context Window | 256K tokens |
| Max Output | 8K tokens |
| Input Cost | $2.50 per million tokens |
| Output Cost | $10.00 per million tokens |
| Release Date | Mar 11, 2025 |
| Output Speed | 40 tokens/sec |
| Latency (time to first token) | 0.73 s |
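At $2.50 per million input tokens and $10.00 per million output tokens, per-request cost is simple arithmetic. A small helper (the token counts in the example are arbitrary):

```python
INPUT_COST_PER_M = 2.50    # USD per million input tokens (from the model card)
OUTPUT_COST_PER_M = 10.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at Command A's listed rates."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 1K-token reply:
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0350
```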
Model Playground
Try Command A instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How Command A performs on standard evaluations.
| Benchmark | Description | Score |
|---|---|---|
| GPQA Diamond | Graduate-level science Q&A | 52.7% |
| Humanity's Last Exam | Cross-domain reasoning | 4.6% |
| LiveCodeBench | Recent coding problems | 28.7% |
| SciCode | Scientific programming | 28.1% |
| MATH-500 | Competition math | 81.9% |
| AIME 2024 | Advanced math exam | 9.7% |
| AIME 2025 | Advanced math exam | 13.0% |
| IFBench | Instruction following | 36.5% |
| LCR | Long-context reasoning | 18.0% |
| Terminal-Bench Hard | Agentic terminal tasks | 0.8% |
| τ²-Bench | Tool use / agents | 15.2% |
Scores sourced from Artificial Analysis.
Find other Cohere models →
Command R7B (12-2024)
Command R7B is Cohere's smallest and fastest model in the R series, with 7 billion parameters and a 128K token context window. Despite its compact size, it ranked first among similarly-sized open-weights models on the HuggingFace Open LLM Leaderboard, leading across IFEval, BBH, GPQA, MuSR, and MMLU. It supports native tool use, multi-step agentic workflows, and RAG across 23 languages, with particular strength in code tasks including SQL and code translation. For API developers, it's the best option when latency and cost are priorities and a full-scale model isn't required.
Command R (08-2024)
Command R 08-2024 is a 32-billion-parameter generative language model from Cohere, optimized for complex reasoning, retrieval-augmented generation, multilingual tasks, and tool use across a 128K token context window. Compared to its predecessor, this version delivers approximately 50% higher throughput and 20% lower latency while showing competitive performance on math, code, and reasoning tasks. It supports 23 languages. For API developers, it is a practical mid-tier option that balances capability and cost — well-suited for question answering, summarization, and RAG-based applications.
Command R+ (08-2024)
Command R+ 08-2024 is Cohere's 104-billion-parameter enterprise-grade language model, updated in August 2024 with enhanced multi-step tool use, improved instruction following, and stronger structured data analysis. Benchmark scores include 80 on MMLU, 50 on HumanEval, and 88 on GSM8K. On public tool-use benchmarks, the Command R+ line has outperformed GPT-4-Turbo. It supports a 128K context window and 23 languages. Developers building complex pipelines that require reliable tool orchestration and citation-quality RAG will find it a strong fit for demanding agentic and enterprise use cases.
Frequently Asked Questions
How can I access Command A?
You can access Command A by Cohere through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is Command A free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Command A to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
What is Command A's pricing?
| Price per 1M tokens | |
|---|---|
| Input | $2.50 |
| Output | $10.00 |
Who created Command A, and when was it released?
Command A was created by Cohere and released on Mar 11, 2025.
How large is Command A's context window?
Command A supports a context window of 256K tokens. For reference, that is roughly equivalent to 512 pages of text.
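The 512-page figure follows from common rules of thumb (about 0.75 English words per token and about 375 words per printed page); both constants are rough approximations, not properties of the model:

```python
CONTEXT_TOKENS = 256_000
WORDS_PER_TOKEN = 0.75  # rough rule of thumb for English text
WORDS_PER_PAGE = 375    # assumption: a fairly dense printed page

words = CONTEXT_TOKENS * WORDS_PER_TOKEN  # 192,000 words
pages = words / WORDS_PER_PAGE            # 512 pages
print(f"{words:,.0f} words ≈ {pages:.0f} pages")  # 192,000 words ≈ 512 pages
```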
How long can Command A's responses be?
Command A can generate up to 8K tokens in a single response.
Does Command A work with JavaScript frameworks?
Yes — the Command A API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Command A to your app without worrying about API keys or setup.
Read the Docs · View Tutorials