MiniMax: MiniMax M2
minimax/minimax-m2
Access MiniMax M2 from MiniMax using the Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "minimax/minimax-m2"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
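Puter.js can also stream the reply as it is generated by passing `stream: true` and iterating the response, with each chunk's text exposed as `part?.text`. The sketch below shows that consumption pattern using a hypothetical `streamedChat` stub generator in place of the live `puter.ai.chat(..., { stream: true })` call, so the chunk-handling logic can be seen on its own:

```javascript
// Sketch: consuming a streamed chat response chunk by chunk.
// `streamedChat` is a hypothetical stand-in for
// `await puter.ai.chat(prompt, { model: "minimax/minimax-m2", stream: true })`;
// each yielded part carries its text on `part?.text`.
async function* streamedChat() {
    yield { text: "Quantum computing uses qubits, " };
    yield { text: "which can hold many states at once." };
}

// Concatenate each chunk as it arrives (in a page you would
// append to a DOM node instead of a string).
async function collectStream(stream) {
    let output = "";
    for await (const part of stream) {
        output += part?.text ?? "";
    }
    return output;
}

collectStream(streamedChat()).then(text => console.log(text));
```

Streaming lets you render partial output immediately instead of waiting for the full completion.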
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "minimax/minimax-m2"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="minimax/minimax-m2",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "minimax/minimax-m2",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
Model Card
MiniMax-M2 is a compact MoE model (230B total parameters, 10B active) optimized for coding and agentic workflows, with a 197K-token context window. It ranks #1 among open-source models for tool use and agent tasks, delivering elite performance in multi-step development workflows at roughly 8% of the cost of comparable models.
Context Window 197K
tokens
Max Output 197K
tokens
Input Cost $0.26
per million tokens
Output Cost $1.00
per million tokens
Release Date Sep 1, 2025
Output Speed 64
tokens / sec
Latency 2.25s
time to first token
Model Playground
Try MiniMax M2 instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How MiniMax M2 performs on standard evaluations.
| Benchmark | Score |
|---|---|
| GPQA Diamond (graduate-level science Q&A) | 77.7% |
| Humanity's Last Exam (cross-domain reasoning) | 12.5% |
| LiveCodeBench (recent coding problems) | 82.6% |
| SciCode (scientific programming) | 36.1% |
| AIME 2025 (advanced math exam) | 78.3% |
| IFBench (instruction following) | 72.3% |
| LCR (long-context reasoning) | 61.0% |
| Terminal-Bench Hard (agentic terminal tasks) | 25.8% |
| τ²-Bench (tool use / agents) | 86.8% |
Scores sourced from Artificial Analysis.
Find other MiniMax models →
MiniMax M2.7
MiniMax M2.7 is a proprietary reasoning LLM from Chinese AI startup MiniMax, released on March 18, 2026, notable for being one of the first commercial models to actively participate in its own training through autonomous self-evolution loops. It excels at agentic coding workflows with a 56.2% score on SWE-Pro and strong performance in office productivity tasks, scoring the highest ELO (1495) on GDPval-AA among open-source-tier models. It targets developers building complex agent systems and automated workflows.
MiniMax M2.5
MiniMax M2.5 is a 230B-parameter Mixture-of-Experts model (10B active) from Shanghai-based MiniMax, designed for real-world productivity with state-of-the-art performance in coding (80.2% SWE-Bench Verified), agentic tool use, and search tasks. It rivals top models from Anthropic and OpenAI while costing 1/10th to 1/20th the price, positioning itself as frontier intelligence 'too cheap to meter.' The model excels at full-stack development, office work (Word, Excel, PowerPoint), and autonomous agent workflows.
MiniMax M2-her
MiniMax M2-her is a dialogue-first large language model built for immersive roleplay, character-driven chat, and expressive multi-turn conversations. It stays consistent in tone and personality across conversations and supports rich message roles to learn from example dialogue. This makes it well-suited for storytelling, AI companions, and conversational experiences where natural flow matters.
Frequently Asked Questions
How do I access MiniMax M2?
You can access MiniMax M2 by MiniMax through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is MiniMax M2 free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add MiniMax M2 to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does MiniMax M2 cost?

| Token type | Price per 1M tokens |
|---|---|
| Input | $0.26 |
| Output | $1.00 |
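As a rough illustration of the listed prices (not an official billing formula — Puter handles actual metering), a per-request cost estimate can be sketched as:

```javascript
// Rough cost estimate from the listed prices: $0.26 per 1M input
// tokens and $1.00 per 1M output tokens. Illustrative only.
function estimateCostUSD(inputTokens, outputTokens) {
    const INPUT_PER_MILLION = 0.26;
    const OUTPUT_PER_MILLION = 1.0;
    return (inputTokens / 1e6) * INPUT_PER_MILLION
         + (outputTokens / 1e6) * OUTPUT_PER_MILLION;
}

// e.g. a request with 2,000 input tokens and 500 output tokens:
console.log(estimateCostUSD(2000, 500).toFixed(6)); // ≈ $0.001020
```

Output tokens dominate the bill for generation-heavy workloads, since they cost about four times as much per token as input.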
Who created MiniMax M2, and when was it released?
MiniMax M2 was created by MiniMax and released on Sep 1, 2025.
What is the context window of MiniMax M2?
MiniMax M2 supports a context window of 197K tokens. For reference, that is roughly equivalent to 393 pages of text.
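The pages figure can be reproduced with a back-of-the-envelope conversion, assuming roughly 0.75 words per token and 375 words per page (both common rules of thumb, not exact values) and reading the 197K window as 196,608 tokens:

```javascript
// Back-of-the-envelope conversion: tokens → words → pages.
// Assumes ~0.75 words per token and ~375 words per page.
function tokensToPages(tokens, wordsPerToken = 0.75, wordsPerPage = 375) {
    return Math.floor((tokens * wordsPerToken) / wordsPerPage);
}

console.log(tokensToPages(196608)); // → 393
```

Real documents vary widely in tokens per page, so treat this as an order-of-magnitude guide.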
What is the maximum output length of MiniMax M2?
MiniMax M2 can generate up to 197K tokens in a single response.
Can I use MiniMax M2 with JavaScript frameworks?
Yes — the MiniMax M2 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add MiniMax M2 to your app without worrying about API keys or setup.
Read the Docs View Tutorials