Inception: Mercury 2
inception/mercury-2
Access Mercury 2 from Inception using the Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "inception/mercury-2"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "inception/mercury-2"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="inception/mercury-2",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "inception/mercury-2",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
Model Card
Mercury 2 is a diffusion-based reasoning language model from Inception Labs that refines all tokens in parallel rather than generating them sequentially, achieving over 1,000 tokens per second — roughly 5x faster than speed-optimized competitors like Claude Haiku and GPT-5 Mini at comparable quality.
On reasoning benchmarks, Mercury 2 scores 91.1 on AIME 2025 and 73.6 on GPQA. It also placed second on the Copilot Arena leaderboard for quality while ranking first for speed overall.
With a 128K context window, it is purpose-built for latency-sensitive applications: real-time assistants, high-throughput pipelines, and cost-conscious production workloads where reasoning capability matters. For real-time use you will typically want to stream the output; see the sketch after the specs below.
| Spec | Value |
|---|---|
| Context Window | 128K tokens |
| Max Output | 50K tokens |
| Input Cost | $0.25 per million tokens |
| Output Cost | $0.75 per million tokens |
| Release Date | Mar 4, 2026 |
| Output Speed | 894 tokens / sec |
| Latency (time to first token) | 3.95 s |
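Since Mercury 2 is aimed at real-time, latency-sensitive use, you will usually want to render tokens as they arrive instead of waiting for the full reply. A minimal streaming sketch, assuming puter.ai.chat supports a stream: true option that returns an async-iterable of parts with a text field (the output element id is hypothetical):

// Stream the reply into the page chunk by chunk.
// Assumes stream: true yields an async-iterable whose parts expose .text.
async function streamReply(prompt) {
    const response = await puter.ai.chat(prompt, {
        model: "inception/mercury-2",
        stream: true
    });

    for await (const part of response) {
        if (part?.text) {
            document.getElementById('output').textContent += part.text;
        }
    }
}

streamReply("Explain quantum computing in simple terms");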
Model Playground
Try Mercury 2 instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How Mercury 2 performs on standard evaluations.
| Benchmark | Description | Score |
|---|---|---|
| GPQA Diamond | Graduate-level science Q&A | 77.0% |
| Humanity's Last Exam | Cross-domain reasoning | 15.5% |
| SciCode | Scientific programming | 38.7% |
| IFBench | Instruction following | 69.8% |
| LCR | Long-context reasoning | 36.3% |
| Terminal-Bench Hard | Agentic terminal tasks | 26.5% |
| τ²-Bench | Tool use / agents | 70.8% |
Scores sourced from Artificial Analysis.
Frequently Asked Questions
How do I access Mercury 2?
You can access Mercury 2 by Inception through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
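If you would rather call the OpenAI-compatible endpoint from Node.js, the official openai npm package works against the same base URL shown in the Python and cURL examples above; a sketch, assuming an ESM project:

// npm install openai
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api.puter.com/puterai/openai/v1/",
    apiKey: "YOUR_PUTER_AUTH_TOKEN",
});

const response = await client.chat.completions.create({
    model: "inception/mercury-2",
    messages: [
        { role: "user", content: "Explain quantum computing in simple terms" }
    ],
});

console.log(response.choices[0].message.content);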
Is Mercury 2 free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Mercury 2 to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Mercury 2 cost?

| | Price per 1M tokens |
|---|---|
| Input | $0.25 |
| Output | $0.75 |
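At these rates, per-request cost is straightforward arithmetic: input tokens at $0.25 per million plus output tokens at $0.75 per million. A quick sanity check:

// Estimate the cost of a single Mercury 2 call at the listed prices.
function estimateCostUSD(inputTokens, outputTokens) {
    return (inputTokens / 1e6) * 0.25 + (outputTokens / 1e6) * 0.75;
}

// A 2,000-token prompt with a 500-token reply:
console.log(estimateCostUSD(2000, 500)); // 0.000875 (less than a tenth of a cent)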
Who created Mercury 2?
Mercury 2 was created by Inception and released on Mar 4, 2026.
What is Mercury 2's context window?
Mercury 2 supports a context window of 128K tokens. For reference, that is roughly equivalent to 256 pages of text.
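In practice that window is large enough to pass a whole document alongside your question in one call. A sketch, assuming puter.ai.chat also accepts an OpenAI-style array of messages (the fetched file path is hypothetical):

// Ask about a long document in a single request, relying on the 128K window.
async function askAboutDocument() {
    const longDocument = await fetch('/docs/annual-report.txt').then(r => r.text());

    const response = await puter.ai.chat([
        { role: "system", content: "Answer using only the provided document." },
        { role: "user", content: longDocument + "\n\nQuestion: What are the key findings?" }
    ], { model: "inception/mercury-2" });

    console.log(response.message.content);
}

askAboutDocument();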
How many tokens can Mercury 2 generate in one response?
Mercury 2 can generate up to 50K tokens in a single response.
Can I use Mercury 2 with JavaScript?
Yes. The Mercury 2 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Mercury 2 to your app without worrying about API keys or setup.
Read the Docs · View Tutorials