Mistral AI: Mistral Medium 3.5
mistralai/mistral-medium-3-5
Access Mistral Medium 3.5 from Mistral AI using the Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "mistralai/mistral-medium-3-5"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
Or load Puter.js directly from a script tag in plain HTML:

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "mistralai/mistral-medium-3-5"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
Prefer Python? Use Puter's OpenAI-compatible API with the official openai client:

# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="mistralai/mistral-medium-3-5",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
Or call the same endpoint directly with cURL:

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "mistralai/mistral-medium-3-5",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
Model Card
Mistral Medium 3.5 is a dense 128-billion-parameter multimodal model from Mistral AI that unifies instruction-following, reasoning, and coding into a single set of weights.
It features a 256k-token context window, native function calling, structured JSON output, and vision capabilities via a custom-trained encoder that handles variable image sizes. A per-request reasoning_effort parameter lets you toggle between fast responses and deeper chain-of-thought processing, making the same model suitable for quick chat replies and complex agentic workflows.
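As a rough sketch of what that per-request toggle could look like through Puter.js (the `reasoning_effort` name comes from the model card above, but passing it straight through as a chat option is an assumption on our part, not a confirmed Puter.js parameter):

```javascript
// Hedged sketch: asking for deeper reasoning on a harder task.
// Assumption: Puter.js forwards the reasoning_effort option to the model.
puter.ai.chat("Find the race condition in this queue and explain the fix", {
    model: "mistralai/mistral-medium-3-5",
    reasoning_effort: "high" // "low" would favor fast, chat-style replies
}).then(response => {
    console.log(response.message.content);
});
```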
On benchmarks, it scores 77.6% on SWE-Bench Verified and 91.4% on τ³-Telecom. It replaces Mistral's previous Medium 3.1, Magistral, and Devstral 2 models. Priced at $1.50 per million input tokens and $7.50 per million output tokens, it's a strong fit for developers building tool-calling agents, long-horizon coding tasks, and multi-step automation pipelines.
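Since the model card highlights native function calling, here is a hedged sketch of a tool-calling request against the OpenAI-compatible endpoint shown above, using the official openai npm client; the `get_weather` tool schema is purely illustrative, not part of any real API:

```javascript
// Sketch: function calling through Puter's OpenAI-compatible endpoint.
// The get_weather tool below is a made-up example schema.
import OpenAI from 'openai';

const client = new OpenAI({
    baseURL: 'https://api.puter.com/puterai/openai/v1/',
    apiKey: 'YOUR_PUTER_AUTH_TOKEN',
});

const response = await client.chat.completions.create({
    model: 'mistralai/mistral-medium-3-5',
    messages: [{ role: 'user', content: "What's the weather in Paris?" }],
    tools: [{
        type: 'function',
        function: {
            name: 'get_weather',
            description: 'Look up the current weather for a city',
            parameters: {
                type: 'object',
                properties: { city: { type: 'string' } },
                required: ['city'],
            },
        },
    }],
});

// If the model decided to call the tool, the call arrives here
// instead of a plain text answer:
console.log(response.choices[0].message.tool_calls);
```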
| Spec | Value |
|---|---|
| Context Window | 262K tokens |
| Max Output | N/A |
| Input Cost | $1.50 per million tokens |
| Output Cost | $7.50 per million tokens |
| Release Date | Apr 30, 2026 |
| Output Speed | 163 tokens/sec |
| Latency | 0.56s (time to first token) |
Model Playground
Try Mistral Medium 3.5 instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How Mistral Medium 3.5 performs on standard evaluations.
| Benchmark | Description | Score |
|---|---|---|
| GPQA Diamond | Graduate-level science Q&A | 74.8% |
| Humanity's Last Exam | Cross-domain reasoning | 12.8% |
| SciCode | Scientific programming | 39.6% |
| IFBench | Instruction following | 68.8% |
| LCR | Long-context reasoning | 61.0% |
| Terminal-Bench Hard | Agentic terminal tasks | 33.3% |
| τ²-Bench | Tool use / agents | 94.2% |
Scores sourced from Artificial Analysis.
Find other Mistral AI models →
Mistral Small 4
Mistral Small 4 is a 119B-parameter open-source Mixture-of-Experts model (6B active per token) released under Apache 2.0, unifying instruction-following, reasoning, multimodal (text + image) understanding, and agentic coding into a single deployment. It features 128 experts, a 256k context window, and configurable reasoning effort that lets developers toggle between fast responses and deep step-by-step reasoning per request. Compared to its predecessor Mistral Small 3, it delivers 40% lower latency and 3x higher throughput while matching or surpassing GPT-OSS 120B on key benchmarks.
Ministral 14B
Ministral 14B is part of the Ministral 3 family, a 14B parameter multimodal model with vision capabilities under Apache 2.0. It offers advanced capabilities for local deployment with instruct, base, and reasoning variants achieving 85% on AIME'25.
Devstral 2
Devstral 2 is a 123B parameter dense transformer coding model achieving 72.2% on SWE-bench Verified with 256K context. Released under modified MIT license, it's the state-of-the-art open model for code agents, 7x more cost-efficient than Claude Sonnet.
Frequently Asked Questions
How do I access Mistral Medium 3.5?
You can access Mistral Medium 3.5 by Mistral AI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
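For example, a minimal Node.js call might look like the sketch below; it mirrors the browser snippet at the top of this page, and the package name comes from the install command shown there:

```javascript
// Minimal Node.js usage, mirroring the browser example above.
import { puter } from '@heyputer/puter.js';

const response = await puter.ai.chat(
    "Summarize the CAP theorem in two sentences",
    { model: "mistralai/mistral-medium-3-5" }
);
console.log(response.message.content);
```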
Is Mistral Medium 3.5 free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Mistral Medium 3.5 to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Mistral Medium 3.5 cost?
| Token type | Price per 1M tokens |
|---|---|
| Input | $1.50 |
| Output | $7.50 |
Who created Mistral Medium 3.5?
Mistral Medium 3.5 was created by Mistral AI and released on Apr 30, 2026.
What context window does Mistral Medium 3.5 support?
Mistral Medium 3.5 supports a context window of 262K tokens. For reference, that is roughly equivalent to 524 pages of text (at about 500 tokens per page).
How intelligent is Mistral Medium 3.5?
Mistral Medium 3.5 scores 39.2 on the Artificial Analysis Intelligence Index, outperforming 84% of tracked models. On coding, it scores 35.4, outperforming 81% of models.
Can I use Mistral Medium 3.5 with JavaScript?
Yes: the Mistral Medium 3.5 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Mistral Medium 3.5 to your app without worrying about API keys or setup.
Read the Docs View Tutorials