xAI: Grok Code Fast 1
x-ai/grok-code-fast-1
Access Grok Code Fast 1 from xAI using Puter.js AI API.
Get Started

```javascript
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
  model: "x-ai/grok-code-fast-1"
}).then(response => {
  document.body.innerHTML = response.message.content;
});
```
```html
<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    puter.ai.chat("Explain quantum computing in simple terms", {
      model: "x-ai/grok-code-fast-1"
    }).then(response => {
      document.body.innerHTML = response.message.content;
    });
  </script>
</body>
</html>
```
```python
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="x-ai/grok-code-fast-1",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
```
```shell
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "x-ai/grok-code-fast-1",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
```
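OpenAI-compatible endpoints generally also accept the standard `stream` flag to receive the response as incremental chunks. A minimal sketch of such a request body, assuming standard Chat Completions semantics (streaming support is not confirmed by this page):

```python
import json

# Request body for a streaming completion against the OpenAI-compatible
# endpoint. Assumes the endpoint honors the standard `stream` flag.
payload = {
    "model": "x-ai/grok-code-fast-1",
    "messages": [
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    "stream": True,  # ask for incremental chunks instead of one final message
}

body = json.dumps(payload)
```

With the official `openai` Python client, the equivalent is passing `stream=True` to `client.chat.completions.create(...)` and iterating the returned chunks.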
Model Card
Grok Code Fast 1 is a fast, economical reasoning model built from scratch for agentic coding workflows, released in August 2025. It excels at TypeScript, Python, Java, Rust, C++, and Go, and pairs a 256K-token context window with measured output speeds of 185 tokens/second.
| Spec | Value |
|---|---|
| Context Window | 256K tokens |
| Max Output | 256K tokens |
| Input Cost | $0.20 per million tokens |
| Output Cost | $1.50 per million tokens |
| Input Modalities | text |
| Tool Use | Yes |
| Knowledge Cutoff | Oct 2023 |
| Release Date | Aug 28, 2025 |
| Output Speed | 185 tokens/sec |
| Latency (time to first token) | 5.22s |
Model Playground
Try Grok Code Fast 1 instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
Benchmarks
How Grok Code Fast 1 performs on standard evaluations.
| Benchmark | Score |
|---|---|
| GPQA Diamond (graduate-level science Q&A) | 72.7% |
| Humanity's Last Exam (cross-domain reasoning) | 7.5% |
| LiveCodeBench (recent coding problems) | 65.7% |
| SciCode (scientific programming) | 36.2% |
| AIME 2025 (advanced math exam) | 43.3% |
| IFBench (instruction following) | 41.4% |
| LCR (long-context reasoning) | 48.3% |
| Terminal-Bench Hard (agentic terminal tasks) | 17.4% |
| τ²-Bench (tool use / agents) | 75.7% |
Scores sourced from Artificial Analysis.
Find other xAI models →
Grok 4.20
Grok 4.20 is xAI's flagship large language model, offering a rare combination of low hallucination rates and high throughput at competitive pricing. It achieved a record 78% non-hallucination rate on the Artificial Analysis Omniscience benchmark — the highest of any model tested — making it a strong choice for applications where factual reliability matters more than peak reasoning scores. It scored 78.5% on GPQA Diamond and 87.3% on MATH-500. The model supports a 2M-token context window, text and image inputs, parallel function calling, structured outputs, and built-in web search. Reasoning can be toggled on or off per request via API parameter. At $2 per million input tokens and $6 per million output tokens, it's one of the most affordable frontier models available, with output speeds exceeding 230 tokens per second.
Grok 4.20 Multi-Agent
Grok 4.20 Multi-Agent is a variant of xAI's Grok 4.20 purpose-built for orchestrating multiple AI agents that collaborate on complex, multi-step tasks in real time. Rather than relying on a single inference pass, it coordinates parallel agents that independently search, analyze, and cross-reference information before synthesizing a final response. At low or medium reasoning effort it runs 4 agents; at high or extra-high effort it scales to 16. It scored a 68.7 agentic index on Artificial Analysis — among the highest available. The model shares Grok 4.20's 2M-token context window and natively supports web search, X search, and tool orchestration. It generates up to 2M output tokens per response, making it well suited for deep research workflows, multi-source analysis, and long-running agent pipelines.
Grok 4.1 Fast
Grok 4.1 Fast is xAI's strongest tool-calling model, released in November 2025. It features a 2M-token context window and roughly half the hallucination rate of Grok 4 Fast, comes in reasoning and non-reasoning modes, and is optimized for agentic workflows with native support for web search, X search, and code execution.
Frequently Asked Questions
How do I access Grok Code Fast 1?
You can access Grok Code Fast 1 by xAI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is Grok Code Fast 1 free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Grok Code Fast 1 to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
| Price per 1M tokens | |
|---|---|
| Input | $0.20 |
| Output | $1.50 |
Who created Grok Code Fast 1, and when was it released?
Grok Code Fast 1 was created by xAI and released on Aug 28, 2025.
What context window does Grok Code Fast 1 support?
Grok Code Fast 1 supports a context window of 256K tokens. For reference, that is roughly equivalent to 512 pages of text.
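The 512-page figure is a back-of-envelope estimate; it follows from common rules of thumb of roughly 0.75 words per token and 375 words per page (both ratios are approximations, not stated on this page):

```python
# Back-of-envelope check: expressing a 256K-token context window as pages.
# Assumes ~0.75 words per token and ~375 words per page (rough heuristics).
tokens = 256_000
words = tokens * 0.75      # about 192,000 words
pages = words / 375        # about 512 pages
```

Actual page counts vary with the tokenizer and the density of the text; code, for example, tokenizes less efficiently than prose.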
How many tokens can Grok Code Fast 1 generate in a single response?
Grok Code Fast 1 can generate up to 256K tokens in a single response.
What is Grok Code Fast 1's knowledge cutoff?
Grok Code Fast 1 has a knowledge cutoff date of Oct 2023. This means the model was trained on data available up to that date.
What input and output modalities does Grok Code Fast 1 support?
Grok Code Fast 1 accepts text input and produces text output.
Does Grok Code Fast 1 support tool use?
Yes, Grok Code Fast 1 supports tool use (function calling), allowing it to interact with external tools, APIs, and data sources as part of its response flow.
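Through the OpenAI-compatible endpoint, tool use would follow the standard Chat Completions `tools` schema. A minimal sketch of such a request body; `get_weather` is a hypothetical tool defined only for illustration, and this page does not document the exact tool-calling wire format:

```python
import json

# Function-calling request for the OpenAI-compatible endpoint, using the
# standard Chat Completions `tools` schema. `get_weather` is a hypothetical
# tool; the model may respond with a tool_call naming it and its arguments.
payload = {
    "model": "x-ai/grok-code-fast-1",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

body = json.dumps(payload)
```

Your application executes the requested tool itself, then sends the result back in a follow-up message so the model can compose its final answer.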
Does the Grok Code Fast 1 API work with my framework?
Yes: the Grok Code Fast 1 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Grok Code Fast 1 to your app without worrying about API keys or setup.
Read the Docs · View Tutorials