Mixtral 8x22B Instruct API
Access Mixtral 8x22B Instruct from Mistral AI using the Puter.js AI API.
mistralai/mixtral-8x22b-instruct
Model Card
Mixtral 8x22B is a sparse Mixture-of-Experts (MoE) model with 141B total parameters (39B active), a 64K-token context window, and native function calling (see the function-calling sketch after the stats below). It outperforms Llama 2 70B and matches GPT-3.5 while remaining cost-efficient, and it is released under the Apache 2.0 license.
Context Window: N/A tokens
Max Output: N/A tokens
Input Cost: $2 per million tokens
Output Cost: $6 per million tokens
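Since the model card lists native function calling, here is a minimal sketch of how a tool call might be wired up through Puter.js. The OpenAI-style tools option, the get_weather tool, and the response.message.tool_calls shape are assumptions for illustration and are not confirmed by this page.

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Hypothetical tool definition; the name, description, and schema
        // are placeholders, not a real API provided by Puter.js.
        const tools = [{
            type: "function",
            function: {
                name: "get_weather",
                description: "Get the current weather for a city",
                parameters: {
                    type: "object",
                    properties: { city: { type: "string" } },
                    required: ["city"]
                }
            }
        }];

        puter.ai.chat("What's the weather in Paris?", {
            model: "mistralai/mixtral-8x22b-instruct",
            tools
        }).then(response => {
            // If the model chose to call a tool, its name and JSON arguments
            // are assumed to appear in tool_calls; otherwise fall back to the
            // plain text reply.
            const call = response.message?.tool_calls?.[0];
            console.log(call ? call.function : response.message.content);
        });
    </script>
</body>
</html>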
API Usage Example
Add Mixtral 8x22B Instruct to your app with just a few lines of code.
No API keys, no backend, no configuration required.
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "mistralai/mixtral-8x22b-instruct"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
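For longer responses you can render text as it arrives instead of waiting for the full reply. The sketch below assumes Puter.js's streaming mode, where stream: true makes the response an async iterable and each part carries a text chunk; treat the part?.text field as an assumption if your version differs.

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        (async () => {
            // Streaming sketch: stream: true and iterating the response with
            // for await follow the Puter.js streaming pattern; part?.text is
            // assumed to hold each incremental text chunk.
            const response = await puter.ai.chat(
                "Explain quantum computing in simple terms",
                { model: "mistralai/mixtral-8x22b-instruct", stream: true }
            );
            for await (const part of response) {
                document.body.innerHTML += part?.text ?? "";
            }
        })();
    </script>
</body>
</html>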
Get started with Puter.js
Add Mixtral 8x22B Instruct to your app without worrying about API keys or setup.