Mistral AI: Mixtral 8x22B Instruct
mistralai/mixtral-8x22b-instruct
Access Mixtral 8x22B Instruct from Mistral AI using the Puter.js AI API.
Get Started
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "mistralai/mixtral-8x22b-instruct"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "mistralai/mixtral-8x22b-instruct"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
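For longer replies you can stream the output as it is generated instead of waiting for the full response. A minimal sketch, assuming the stream: true option and the async-iterable response described in the Puter.js docs (the exact shape of each part may vary):

<script src="https://js.puter.com/v2/"></script>
<script>
    (async () => {
        // Ask for a streamed response rather than a single completed message.
        const stream = await puter.ai.chat(
            "Explain quantum computing in simple terms",
            { model: "mistralai/mixtral-8x22b-instruct", stream: true }
        );
        // Each part carries a chunk of generated text; append it as it arrives.
        for await (const part of stream) {
            if (part?.text) document.body.append(part.text);
        }
    })();
</script>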
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "mistralai/mixtral-8x22b-instruct",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
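puter.ai.chat also accepts a full messages array rather than a single string, which is how you carry multi-turn context between requests. A short sketch, assuming the same OpenAI-style message format used in the examples above (the assistant turn here is illustrative):

// Pass the conversation history so the model can resolve follow-up questions.
const history = [
    { role: "user", content: "Explain quantum computing in simple terms" },
    { role: "assistant", content: "Quantum computing uses qubits..." }, // illustrative
    { role: "user", content: "How is that different from a classical bit?" }
];

puter.ai.chat(history, {
    model: "mistralai/mixtral-8x22b-instruct"
}).then(response => {
    console.log(response.message.content);
});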
Model Card
Mixtral 8x22B is a sparse mixture-of-experts (MoE) model with 141B total parameters, of which 39B are active per token. It offers a 64K-token context window and native function calling, outperforms Llama 2 70B, and matches GPT-3.5 while remaining cost-efficient under the Apache 2.0 license.
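Since the model supports native function calling, you can pass OpenAI-style tool definitions and let it decide when to call them. A minimal sketch, assuming the tools option on puter.ai.chat; get_weather is a hypothetical function used only for illustration:

// Hypothetical tool definition; the model returns a call rather than executing it.
const tools = [{
    type: "function",
    function: {
        name: "get_weather", // hypothetical helper, not a real API
        description: "Get the current weather for a given city",
        parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"]
        }
    }
}];

puter.ai.chat("What's the weather in Paris?", {
    model: "mistralai/mixtral-8x22b-instruct",
    tools
}).then(response => {
    // If the model chose the tool, the request surfaces as tool_calls, not text.
    console.log(response.message.tool_calls ?? response.message.content);
});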
| Spec | Value |
|---|---|
| Context Window | 66K tokens |
| Max Output | N/A |
| Input Cost | $2 per million tokens |
| Output Cost | $6 per million tokens |
| Release Date | Apr 17, 2024 |
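Those per-token rates make request costs easy to estimate. A quick sketch of the arithmetic, using the prices listed above:

// Estimate the cost in USD of a single request from its token counts.
const INPUT_PER_MILLION = 2;  // $2 per 1M input tokens
const OUTPUT_PER_MILLION = 6; // $6 per 1M output tokens

function estimateCost(inputTokens, outputTokens) {
    return (inputTokens / 1e6) * INPUT_PER_MILLION
         + (outputTokens / 1e6) * OUTPUT_PER_MILLION;
}

// Example: a 2,000-token prompt with a 500-token reply costs under a cent.
console.log(estimateCost(2000, 500).toFixed(4)); // "0.0070"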
Model Playground
Try Mixtral 8x22B Instruct instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How Mixtral 8x22B Instruct performs on standard evaluations.
| Benchmark | Description | Score |
|---|---|---|
| GPQA Diamond | Graduate-level science Q&A | 33.2% |
| Humanity's Last Exam | Cross-domain reasoning | 4.1% |
| LiveCodeBench | Recent coding problems | 14.8% |
| SciCode | Scientific programming | 18.8% |
| MATH-500 | Competition math | 54.5% |
| AIME 2024 | Advanced math exam | 0.0% |
Scores sourced from Artificial Analysis.
Find other Mistral AI models →
Mistral Small 4
Mistral Small 4 is a 119B-parameter open-source Mixture-of-Experts model (6B active per token) released under Apache 2.0, unifying instruction-following, reasoning, multimodal (text + image), and agentic coding into a single deployment. It features 128 experts, a 256k context window, and configurable reasoning effort that lets developers toggle between fast responses and deep step-by-step reasoning per request. Compared to its predecessor Mistral Small 3, it delivers 40% lower latency and 3x higher throughput while matching or surpassing GPT-OSS 120B on key benchmarks.
Mistral Small Creative
Mistral Small Creative is a specialized Labs model variant optimized for creative content generation. It builds on the Mistral Small architecture with adjustments for more imaginative and varied outputs in writing tasks.
Ministral 14B
Ministral 14B is part of the Ministral 3 family, a 14B-parameter multimodal model with vision capabilities released under Apache 2.0. It offers advanced capabilities for local deployment, with instruct, base, and reasoning variants achieving 85% on AIME'25.
Frequently Asked Questions
How do I access Mixtral 8x22B Instruct?
You can access Mixtral 8x22B Instruct by Mistral AI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript; no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is Mixtral 8x22B Instruct free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Mixtral 8x22B Instruct to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Mixtral 8x22B Instruct cost?
| | Price per 1M tokens |
|---|---|
| Input | $2 |
| Output | $6 |
Who created Mixtral 8x22B Instruct?
Mixtral 8x22B Instruct was created by Mistral AI and released on Apr 17, 2024.
What is the context window of Mixtral 8x22B Instruct?
Mixtral 8x22B Instruct supports a context window of 66K tokens. For reference, that is roughly equivalent to 131 pages of text.
Does Mixtral 8x22B Instruct work with my framework?
Yes: the Mixtral 8x22B Instruct API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Mixtral 8x22B Instruct to your app without worrying about API keys or setup.
Read the Docs · View Tutorials