xAI: Grok 3 Mini

x-ai/grok-3-mini

Access Grok 3 Mini from xAI using the Puter.js AI API.

Get Started
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "x-ai/grok-3-mini"
}).then(response => {
    console.log(response.message.content);
});

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "x-ai/grok-3-mini"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="x-ai/grok-3-mini",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "x-ai/grok-3-mini",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'

Model Card

Grok 3 Mini is a lightweight, cost-efficient reasoning model that thinks before responding, ideal for logic-based tasks that don't require deep domain knowledge. It features configurable reasoning effort and exposes accessible thinking traces for transparency.
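The reasoning effort mentioned above is set per request. As a minimal sketch, the helper below builds a chat-completions request body carrying a `reasoning_effort` field with the "low"/"high" values xAI documents for Grok 3 Mini; whether Puter's OpenAI-compatible endpoint forwards this field is an assumption here, so the sketch stops at constructing the payload.

```python
# Sketch: a chat-completions request body that sets reasoning effort.
# "reasoning_effort" with values "low"/"high" follows xAI's API for
# grok-3-mini; pass-through by the Puter endpoint is an assumption.
def build_chat_request(prompt: str, effort: str = "low") -> dict:
    if effort not in ("low", "high"):
        raise ValueError("grok-3-mini accepts 'low' or 'high' reasoning effort")
    return {
        "model": "x-ai/grok-3-mini",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

request = build_chat_request("Solve: 17 * 24", effort="high")
```

With the OpenAI Python SDK, a dictionary like this would typically be sent via the `extra_body` parameter of `client.chat.completions.create`.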

Context Window: 131K tokens
Max Output: 131K tokens
Input Cost: $0.30 per million tokens
Output Cost: $0.50 per million tokens
Input Modalities: text
Tool Use: Yes
Knowledge Cutoff: Nov 2024
Release Date: Feb 17, 2025
Output Speed: 216 tokens/sec
Latency: 0.41s (time to first token)

Model Playground

Try Grok 3 Mini instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.


Benchmarks

How Grok 3 Mini performs on standard evaluations.

Artificial Analysis Intelligence Index: 32.1 (better than 77% of tracked models)
Artificial Analysis Coding Index: 25.2 (better than 66% of tracked models)
Artificial Analysis Math Index: 84.7 (better than 81% of tracked models)

GPQA Diamond (graduate-level science Q&A): 79.1%
Humanity's Last Exam (cross-domain reasoning): 11.1%
LiveCodeBench (recent coding problems): 69.6%
SciCode (scientific programming): 40.6%
MATH-500 (competition math): 99.2%
AIME 2024 (advanced math exam): 93.3%
AIME 2025 (advanced math exam): 84.7%
IFBench (instruction following): 45.9%
LCR (long-context reasoning): 50.3%
Terminal-Bench Hard (agentic terminal tasks): 17.4%
τ²-Bench (tool use / agents): 90.4%

Scores sourced from Artificial Analysis.

Find other xAI models

Grok 4.20

Grok 4.20 is xAI's flagship large language model, offering a rare combination of low hallucination rates and high throughput at competitive pricing. It achieved a record 78% non-hallucination rate on the Artificial Analysis Omniscience benchmark — the highest of any model tested — making it a strong choice for applications where factual reliability matters more than peak reasoning scores. It scored 78.5% on GPQA Diamond and 87.3% on MATH-500. The model supports a 2M-token context window, text and image inputs, parallel function calling, structured outputs, and built-in web search. Reasoning can be toggled on or off per request via API parameter. At $2 per million input tokens and $6 per million output tokens, it's one of the most affordable frontier models available, with output speeds exceeding 230 tokens per second.

Grok 4.20 Multi-Agent

Grok 4.20 Multi-Agent is a variant of xAI's Grok 4.20 purpose-built for orchestrating multiple AI agents that collaborate on complex, multi-step tasks in real time. Rather than relying on a single inference pass, it coordinates parallel agents that independently search, analyze, and cross-reference information before synthesizing a final response. At low or medium reasoning effort it runs 4 agents; at high or extra-high effort it scales to 16. It scored a 68.7 agentic index on Artificial Analysis — among the highest available. The model shares Grok 4.20's 2M-token context window and natively supports web search, X search, and tool orchestration. It generates up to 2M output tokens per response, making it well suited for deep research workflows, multi-source analysis, and long-running agent pipelines.

Grok 4.1 Fast

Grok 4.1 Fast, released in November 2025, is xAI's best tool-calling model, featuring a 2M context window and halved hallucination rates versus Grok 4 Fast. It comes in reasoning and non-reasoning modes and is optimized for agentic workflows, with native support for web search, X search, and code execution.

Frequently Asked Questions

How do I use Grok 3 Mini?

You can access Grok 3 Mini by xAI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is Grok 3 Mini free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Grok 3 Mini to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Grok 3 Mini?

Pricing for Grok 3 Mini is based on the number of input and output tokens used per request.

Price per 1M tokens:
Input: $0.30
Output: $0.50

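At these rates, per-request cost is simple arithmetic. The sketch below estimates the cost of a single request; the 10,000-input / 2,000-output token counts are illustrative, not from the source.

```python
# Back-of-envelope cost estimate at the listed Grok 3 Mini rates:
# $0.30 per 1M input tokens, $0.50 per 1M output tokens.
INPUT_USD_PER_M = 0.30
OUTPUT_USD_PER_M = 0.50

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# e.g. a request with 10,000 input tokens and 2,000 output tokens
cost = estimate_cost_usd(10_000, 2_000)  # ~ $0.004
```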
Who created Grok 3 Mini?

Grok 3 Mini was created by xAI and released on Feb 17, 2025.

What is the context window of Grok 3 Mini?

Grok 3 Mini supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text.
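The pages figure follows from a rough conversion, assuming the context window is the full 131,072 tokens and an average of about 500 tokens per page of plain prose (both assumptions, not from the source):

```python
# Rough arithmetic behind the "262 pages" figure.
CONTEXT_TOKENS = 131_072   # 131K-token context window
TOKENS_PER_PAGE = 500      # assumed average for plain prose

pages = CONTEXT_TOKENS // TOKENS_PER_PAGE  # 262
```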

What is the max output length of Grok 3 Mini?

Grok 3 Mini can generate up to 131K tokens in a single response.

What is the knowledge cutoff of Grok 3 Mini?

Grok 3 Mini has a knowledge cutoff date of Nov 2024. This means the model was trained on data available up to that date.

What types of input can Grok 3 Mini process?

Grok 3 Mini accepts text input and produces text output.

Does Grok 3 Mini support tool use (function calling)?

Yes, Grok 3 Mini supports tool use (function calling), allowing it to interact with external tools, APIs, and data sources as part of its response flow.
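As a sketch of what a function-calling request looks like, the payload below uses the standard OpenAI-style `tools` schema, which Puter's OpenAI-compatible endpoint is assumed to accept; `get_weather` is a hypothetical tool, not part of any API.

```python
# Sketch: an OpenAI-style tool definition attached to a chat request.
# "get_weather" is a hypothetical example tool; the "tools" shape is the
# standard chat-completions schema, assumed to pass through unchanged.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "x-ai/grok-3-mini",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}
```

When the model decides to call the tool, the response carries a `tool_calls` entry with the function name and JSON arguments, which your code executes before returning the result in a follow-up message.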

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Grok 3 Mini API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add Grok 3 Mini to your app without worrying about API keys or setup.

Read the Docs View Tutorials