x-ai/grok-2-vision-1212
Model Card
Grok 2 Vision 1212 is xAI's updated multimodal vision model, released in December 2024, with improved accuracy, instruction following, and multilingual capability over the original Grok 2 Vision. It combines advanced visual comprehension with text understanding, excelling at object recognition, style analysis, and document-based question answering, and supports a 32,768-token (33K) context window.
| Spec | Value |
|---|---|
| Context Window | 33K tokens |
| Max Output | 33K tokens |
| Input Cost | $2 per million tokens |
| Output Cost | $10 per million tokens |
| Input Modalities | text, image |
| Tool Use | Yes |
| Knowledge Cutoff | Aug 2024 |
| Release Date | Dec 12, 2024 |
API Usage Example
Add Grok 2 Vision 1212 to your app with just a few lines of code.
No backend, no configuration required.
Using npm:

```javascript
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
  model: "x-ai/grok-2-vision-1212"
}).then(response => {
  document.body.innerHTML = response.message.content;
});
```
Or load the library directly in the browser:

```html
<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    puter.ai.chat("Explain quantum computing in simple terms", {
      model: "x-ai/grok-2-vision-1212"
    }).then(response => {
      document.body.innerHTML = response.message.content;
    });
  </script>
</body>
</html>
```
In Python, via Puter's OpenAI-compatible API:

```python
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="x-ai/grok-2-vision-1212",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
```
Or with cURL:

```bash
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "x-ai/grok-2-vision-1212",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
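The examples above send text only, but Grok 2 Vision 1212 is a vision model. In the standard OpenAI chat format, an image is attached as an `image_url` content part alongside the text. A minimal sketch, assuming Puter's OpenAI-compatible endpoint accepts this format for image input; the image URL below is a placeholder:

```python
# Sketch of a vision request payload in the standard OpenAI chat format.
# Whether Puter's endpoint accepts image_url content parts for this model
# is an assumption; the part structure itself is the standard format.

def build_vision_messages(question: str, image_url: str) -> list:
    """Build a messages list mixing a text question and an image reference."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_vision_messages(
    "What objects are in this photo?",
    "https://example.com/photo.jpg",  # placeholder URL
)

# Pass to the client from the Python example above:
# client.chat.completions.create(
#     model="x-ai/grok-2-vision-1212", messages=messages)
```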
More AI Models From xAI
Grok 4.20 Beta
Grok 4.20 Beta is xAI's newest flagship model, featuring a native 4-agent collaboration system (Grok, Harper, Benjamin, Lucas) that reasons in parallel and debates internally before delivering a unified response. It introduces a rapid-learning architecture that improves weekly from real-world feedback, and builds on a ~3T parameter MoE backbone with up to 2M token context. It claims a 65% reduction in hallucinations over Grok 4.1 and strong gains in coding, math, and engineering reasoning.
Grok 4.20 Multi-Agent Beta
Grok 4.20 Multi-Agent Beta is an API-specific variant of Grok 4.20 optimized for orchestrating multiple agents that collaborate on deep research tasks. It supports web search and X search tools natively, uses the same 2M token context window, and is designed for developer workflows requiring structured multi-agent collaboration.
Grok 4.1 Fast
Grok 4.1 Fast is xAI's best tool-calling model released November 2025, featuring a 2M context window and halved hallucination rates versus Grok 4 Fast. It comes in reasoning and non-reasoning modes and is optimized for agentic workflows with native support for web search, X search, and code execution.
Frequently Asked Questions
**How do I access Grok 2 Vision 1212?**

You can access Grok 2 Vision 1212 by xAI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it from Python or cURL via Puter's OpenAI-compatible API.
**Is Grok 2 Vision 1212 free to use?**

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Grok 2 Vision 1212 to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
**How much does Grok 2 Vision 1212 cost?**

| Token type | Price per 1M tokens |
|---|---|
| Input | $2 |
| Output | $10 |
**Who created Grok 2 Vision 1212?**

Grok 2 Vision 1212 was created by xAI and released on Dec 12, 2024.
**What is the context window of Grok 2 Vision 1212?**

Grok 2 Vision 1212 supports a context window of 33K tokens. For reference, that is roughly equivalent to 66 pages of text.
**What is the maximum output length?**

Grok 2 Vision 1212 can generate up to 33K tokens in a single response.
**What is the knowledge cutoff date?**

Grok 2 Vision 1212 has a knowledge cutoff of Aug 2024, meaning the model was trained on data available up to that date.
**What input and output types does it support?**

Grok 2 Vision 1212 accepts the following input types: text, image. It produces text.
**Does Grok 2 Vision 1212 support tool use?**

Yes, Grok 2 Vision 1212 supports tool use (function calling), allowing it to interact with external tools, APIs, and data sources as part of its response flow.
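Since the model supports function calling, a tool can be described using the standard OpenAI tools schema, which OpenAI-compatible endpoints generally accept. A minimal sketch; the `get_weather` tool below is a hypothetical example, not part of Puter's API:

```python
# Sketch of a tool definition in the standard OpenAI function-calling
# schema. The get_weather tool is hypothetical and only illustrates
# the shape of the "tools" parameter.

def build_weather_tool() -> dict:
    """Describe a hypothetical get_weather function as a callable tool."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }

tools = [build_weather_tool()]

# Pass alongside messages in the Python example above; the model may then
# respond with a tool call instead of plain text:
# client.chat.completions.create(
#     model="x-ai/grok-2-vision-1212", messages=messages, tools=tools)
```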
**Does it work with JavaScript frameworks?**

Yes: the Grok 2 Vision 1212 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Grok 2 Vision 1212 to your app without worrying about API keys or setup.
Read the Docs View Tutorials