mistralai/mistral-large-2512
Model Card
Mistral Large 3 is Mistral's frontier open-weight multimodal model: a 675B-parameter sparse mixture-of-experts (MoE) model with 41B active parameters, trained on 3,000 H200 GPUs. It supports a 256K-token context window and native vision, and excels at agentic workflows and enterprise applications.
Context Window
262,144
tokens
Max Output
N/A
Input Cost
$0.50
per million tokens
Output Cost
$1.50
per million tokens
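At these rates, the cost of a call follows directly from its token counts. A minimal sketch of that arithmetic (the `estimateCost` helper is illustrative, not part of any API):

```javascript
// Estimate the dollar cost of one Mistral Large 3 request from the
// listed rates: $0.50 per million input tokens, $1.50 per million output tokens.
function estimateCost(inputTokens, outputTokens) {
    const INPUT_RATE = 0.50 / 1_000_000;   // dollars per input token
    const OUTPUT_RATE = 1.50 / 1_000_000;  // dollars per output token
    return inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE;
}

// e.g. a 2,000-token prompt with a 500-token reply:
console.log(estimateCost(2000, 500).toFixed(6)); // → "0.001750"
```

In other words, a million tokens in and a million tokens out together cost $2.00.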
API Usage Example
Add Mistral Large 3 to your app with just a few lines of code.
No API keys, no backend, no configuration required.
<!DOCTYPE html>
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Send a prompt to Mistral Large 3 and render the reply in the page.
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "mistralai/mistral-large-2512"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
Get started with Puter.js
Add Mistral Large 3 to your app without worrying about API keys or setup.
Read the Docs View Tutorials