MiniMax: MiniMax M1
minimax/minimax-m1
Access MiniMax M1 from MiniMax using the Puter.js AI API.
Get Started
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';
puter.ai.chat("Explain quantum computing in simple terms", {
model: "minimax/minimax-m1"
}).then(response => {
document.body.innerHTML = response.message.content;
});
<html>
<body>
<script src="https://js.puter.com/v2/"></script>
<script>
puter.ai.chat("Explain quantum computing in simple terms", {
model: "minimax/minimax-m1"
}).then(response => {
document.body.innerHTML = response.message.content;
});
</script>
</body>
</html>
# pip install openai
from openai import OpenAI
client = OpenAI(
base_url="https://api.puter.com/puterai/openai/v1/",
api_key="YOUR_PUTER_AUTH_TOKEN",
)
response = client.chat.completions.create(
model="minimax/minimax-m1",
messages=[
{"role": "user", "content": "Explain quantum computing in simple terms"}
],
)
print(response.choices[0].message.content)
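Because the chat completions API is stateless, a multi-turn conversation is built by resending the full message history with every request. A minimal Python sketch of that bookkeeping (the `add_turn` helper is illustrative, not part of any SDK):

```python
# The API is stateless: every request must carry all prior turns
# in the messages list, alternating "user" and "assistant" roles.
def add_turn(messages, role, content):
    """Append one turn to the running conversation history."""
    messages.append({"role": role, "content": content})
    return messages

history = []
add_turn(history, "user", "Explain quantum computing in simple terms")
# After receiving a reply, record it before asking a follow-up:
add_turn(history, "assistant", "Quantum computing uses qubits...")
add_turn(history, "user", "How is that different from a classical bit?")

# The accumulated history is what you would pass as messages= in
# client.chat.completions.create(model="minimax/minimax-m1", messages=history)
print(len(history))  # 3 turns so far
```

Each follow-up request simply sends the grown list; the model sees the whole exchange and answers in context.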
curl https://api.puter.com/puterai/openai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
-d '{
"model": "minimax/minimax-m1",
"messages": [
{"role": "user", "content": "Explain quantum computing in simple terms"}
]
}'
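The JSON body in the curl call above can also be assembled programmatically before sending. A small Python sketch (the `build_chat_payload` helper is illustrative):

```python
import json

def build_chat_payload(prompt, model="minimax/minimax-m1"):
    """Assemble the JSON body for /puterai/openai/v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_payload("Explain quantum computing in simple terms")
body = json.dumps(payload)  # this string is what curl's -d flag carries
print(body)
```

The serialized string is byte-for-byte equivalent to the hand-written `-d` payload, which makes it easy to extend with extra fields later.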
Model Card
MiniMax-M1 is the world's first open-source hybrid-attention reasoning model. It features a 1-million-token context window and an 80K-token reasoning output budget, excels at software engineering, long-context tasks, and complex reasoning, and was trained with CISPO, an efficient reinforcement-learning algorithm.
| Spec | Value |
|---|---|
| Context Window | 1M tokens |
| Max Output | 40K tokens |
| Input Cost | $0.40 per million tokens |
| Output Cost | $2.20 per million tokens |
| Release Date | May 29, 2025 |
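At the listed per-token rates, the cost of an individual request is easy to estimate. A quick Python sketch (the helper name and example token counts are illustrative):

```python
INPUT_PER_M = 0.40   # USD per million input tokens
OUTPUT_PER_M = 2.20  # USD per million output tokens

def request_cost(input_tokens, output_tokens):
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token reply:
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")
```

Note that output tokens dominate the bill for long responses, since they cost over five times as much as input tokens.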
Model Playground
Try MiniMax M1 instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How MiniMax M1 performs on standard evaluations.
| Benchmark | Description | Score |
|---|---|---|
| GPQA Diamond | Graduate-level science Q&A | 69.7% |
| Humanity's Last Exam | Cross-domain reasoning | 8.2% |
| LiveCodeBench | Recent coding problems | 71.1% |
| SciCode | Scientific programming | 37.4% |
| MATH-500 | Competition math | 98.0% |
| AIME 2024 | Advanced math exam | 84.7% |
| AIME 2025 | Advanced math exam | 61.0% |
| IFBench | Instruction following | 41.8% |
| LCR | Long-context reasoning | 54.3% |
| Terminal-Bench Hard | Agentic terminal tasks | 3.0% |
| τ²-Bench | Tool use / agents | 34.2% |
Scores sourced from Artificial Analysis.
Find other MiniMax models →
MiniMax M2.7
MiniMax M2.7 is a proprietary reasoning LLM from Chinese AI startup MiniMax, released on March 18, 2026, notable for being one of the first commercial models to actively participate in its own training through autonomous self-evolution loops. It excels at agentic coding workflows with a 56.2% score on SWE-Pro and strong performance in office productivity tasks, scoring the highest ELO (1495) on GDPval-AA among open-source-tier models. It targets developers building complex agent systems and automated workflows.
MiniMax M2.5
MiniMax M2.5 is a 230B-parameter Mixture-of-Experts model (10B active) from Shanghai-based MiniMax, designed for real-world productivity with state-of-the-art performance in coding (80.2% SWE-Bench Verified), agentic tool use, and search tasks. It rivals top models from Anthropic and OpenAI while costing 1/10th to 1/20th the price, positioning itself as frontier intelligence 'too cheap to meter.' The model excels at full-stack development, office work (Word, Excel, PowerPoint), and autonomous agent workflows.
MiniMax M2-her
MiniMax M2-her is a dialogue-first large language model built for immersive roleplay, character-driven chat, and expressive multi-turn conversations. It stays consistent in tone and personality across conversations and supports rich message roles to learn from example dialogue. This makes it well-suited for storytelling, AI companions, and conversational experiences where natural flow matters.
Frequently Asked Questions
How do I access MiniMax M1?
You can access MiniMax M1 by MiniMax through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is MiniMax M1 free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add MiniMax M1 to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does MiniMax M1 cost?
| Token type | Price per 1M tokens |
|---|---|
| Input | $0.40 |
| Output | $2.20 |
Who created MiniMax M1, and when was it released?
MiniMax M1 was created by MiniMax and released on May 29, 2025.
How large is MiniMax M1's context window?
MiniMax M1 supports a context window of 1M tokens. For reference, that is roughly equivalent to 2,000 pages of text.
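A common rule of thumb is roughly 4 characters per token for English text, which gives a quick way to estimate whether a document fits in the 1M-token window. This heuristic is approximate; the model's actual tokenizer will count differently:

```python
def fits_context(text, context_tokens=1_000_000, chars_per_token=4):
    """Rough check: estimate token count from character length alone,
    using the ~4 characters-per-token heuristic for English text."""
    est_tokens = len(text) / chars_per_token
    return est_tokens, est_tokens <= context_tokens

# A ~500K-character document comes to an estimated 125K tokens,
# well within the 1M-token window:
tokens, ok = fits_context("word " * 100_000)
print(round(tokens), ok)
```

For precise counts you would run the provider's tokenizer; this estimate is only for quick feasibility checks.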
What is MiniMax M1's maximum output length?
MiniMax M1 can generate up to 40K tokens in a single response.
Does the MiniMax M1 API work with JavaScript frameworks?
Yes — the MiniMax M1 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add MiniMax M1 to your app without worrying about API keys or setup.
Read the Docs View Tutorials