
xAI: Grok 2 Vision 1212

x-ai/grok-2-vision-1212

Access Grok 2 Vision 1212 from xAI using the Puter.js AI API.

Get Started

Use the npm package in Node.js or a bundled web app:

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "x-ai/grok-2-vision-1212"
}).then(response => {
    console.log(response.message.content);
});

Or load Puter.js from a script tag in plain HTML:

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "x-ai/grok-2-vision-1212"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>

You can also call the OpenAI-compatible endpoint with the Python SDK:
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="x-ai/grok-2-vision-1212",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)

The same request with cURL:
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "x-ai/grok-2-vision-1212",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
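
Since this is a vision model, you can also send an image alongside the prompt. A minimal sketch of the request body, using the standard OpenAI-style `image_url` content format accepted by OpenAI-compatible chat completions endpoints (the image URL is a placeholder, and that Puter's endpoint accepts this exact format is an assumption):

```python
import json

def build_vision_messages(prompt, image_url):
    # OpenAI-style multimodal message: a text part plus an image_url part.
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

# Placeholder URL for illustration only.
payload = {
    "model": "x-ai/grok-2-vision-1212",
    "messages": build_vision_messages(
        "What is in this image?", "https://example.com/photo.jpg"
    ),
}
print(json.dumps(payload, indent=2))
```

Pass `payload["messages"]` as the `messages=` argument of `client.chat.completions.create` in the Python example above, or POST the whole payload as JSON to the chat completions endpoint shown in the cURL example.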

Model Card

Grok 2 Vision 1212 is xAI's updated multimodal vision model released December 2024, featuring improved accuracy, instruction-following, and multilingual capabilities over the original Grok 2 Vision. It combines advanced visual comprehension with text understanding, excelling at object recognition, style analysis, and document-based question answering with a 32K context window.

Context Window: 33K tokens
Max Output: 33K tokens
Input Cost: $2 per million tokens
Output Cost: $10 per million tokens
Input Modalities: text, image
Tool Use: Yes
Knowledge Cutoff: Aug 2024
Release Date: Dec 12, 2024

Model Playground

Try Grok 2 Vision 1212 instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.


More AI Models From xAI

Find other xAI models

Chat

Grok 4.3

Grok 4.3 is xAI's latest flagship reasoning model, designed for agentic workflows, instruction following, and tasks demanding high factual accuracy. It accepts text and image inputs with always-on reasoning that cannot be disabled. The model supports a 1 million token context window with no output token limit, making it well suited for long-document analysis and multi-step agent tasks. Priced at $1.25 per million input tokens and $2.50 per million output tokens, it delivers improved cost-efficiency over its predecessor Grok 4.20 — scoring higher on the Artificial Analysis Intelligence Index while costing roughly 20% less to run. Grok 4.3 showed a major jump in real-world agentic task performance, gaining over 300 Elo points on GDPval-AA versus Grok 4.20. It also scores 98% on τ²-Bench Telecom and 81% on IFBench. A strong pick for developers building cost-sensitive agent systems that need reliable tool use and instruction adherence.

Chat

Grok 4.20

Grok 4.20 is xAI's flagship large language model, offering a rare combination of low hallucination rates and high throughput at competitive pricing. It achieved a record 78% non-hallucination rate on the Artificial Analysis Omniscience benchmark — the highest of any model tested — making it a strong choice for applications where factual reliability matters more than peak reasoning scores. It scored 78.5% on GPQA Diamond and 87.3% on MATH-500. The model supports a 2M-token context window, text and image inputs, parallel function calling, structured outputs, and built-in web search. Reasoning can be toggled on or off per request via API parameter. At $2 per million input tokens and $6 per million output tokens, it's one of the most affordable frontier models available, with output speeds exceeding 230 tokens per second.

Chat

Grok 4.20 Multi-Agent

Grok 4.20 Multi-Agent is a variant of xAI's Grok 4.20 purpose-built for orchestrating multiple AI agents that collaborate on complex, multi-step tasks in real time. Rather than relying on a single inference pass, it coordinates parallel agents that independently search, analyze, and cross-reference information before synthesizing a final response. At low or medium reasoning effort it runs 4 agents; at high or extra-high effort it scales to 16. It scored a 68.7 agentic index on Artificial Analysis — among the highest available. The model shares Grok 4.20's 2M-token context window and natively supports web search, X search, and tool orchestration. It generates up to 2M output tokens per response, making it well suited for deep research workflows, multi-source analysis, and long-running agent pipelines.

Frequently Asked Questions

How do I use Grok 2 Vision 1212?

You can access Grok 2 Vision 1212 from xAI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
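
For long answers you may prefer to stream tokens as they arrive. The OpenAI-compatible streaming format delivers chunks whose `choices[0].delta.content` carries each text fragment; `collect_stream` below is a hypothetical helper, and that Puter's endpoint honors `stream=True` like the standard API is an assumption:

```python
def collect_stream(chunks):
    # Join the text fragments from OpenAI-style streaming chunks.
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # role-only or tool-call chunks carry no text
            parts.append(delta.content)
    return "".join(parts)

# With the OpenAI client from the Python example above (assumption:
# the endpoint supports stream=True like the standard API):
# stream = client.chat.completions.create(
#     model="x-ai/grok-2-vision-1212",
#     messages=[{"role": "user", "content": "Explain quantum computing"}],
#     stream=True,
# )
# print(collect_stream(stream))
```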

Is Grok 2 Vision 1212 free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Grok 2 Vision 1212 to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Grok 2 Vision 1212?
Grok 2 Vision 1212 costs $2 per 1M input tokens and $10 per 1M output tokens.
Price per 1M tokens:
Input: $2
Output: $10
Who created Grok 2 Vision 1212?

Grok 2 Vision 1212 was created by xAI and released on Dec 12, 2024.

What is the context window of Grok 2 Vision 1212?

Grok 2 Vision 1212 supports a context window of 33K tokens. For reference, that is roughly equivalent to 66 pages of text.

What is the max output length of Grok 2 Vision 1212?

Grok 2 Vision 1212 can generate up to 33K tokens in a single response.

What is the knowledge cutoff of Grok 2 Vision 1212?

Grok 2 Vision 1212 has a knowledge cutoff date of Aug 2024. This means the model was trained on data available up to that date.

What types of input can Grok 2 Vision 1212 process?

Grok 2 Vision 1212 accepts the following input types: text, image. It produces: text.

Does Grok 2 Vision 1212 support tool use (function calling)?

Yes, Grok 2 Vision 1212 supports tool use (function calling), allowing it to interact with external tools, APIs, and data sources as part of its response flow.
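
In practice this uses the standard OpenAI function-calling schema: you describe your tools in the request and inspect `tool_calls` in the reply. A sketch of such a request body; the `get_weather` tool is a hypothetical example, not a built-in:

```python
import json

# Hypothetical example tool in the standard OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

def build_tool_request(prompt):
    # Chat-completions request body that offers the model our tool.
    return {
        "model": "x-ai/grok-2-vision-1212",
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
    }

print(json.dumps(build_tool_request("What's the weather in Paris?"), indent=2))
```

If the model decides to call the tool, `choices[0].message.tool_calls` in the response will contain the function name and its JSON-encoded arguments; you run the function yourself and send the result back in a follow-up `tool` message.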

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Grok 2 Vision 1212 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add Grok 2 Vision 1212 to your app without worrying about API keys or setup.

Read the Docs | View Tutorials