
Google: Gemma 3 12B

Access Gemma 3 12B from Google using the Puter.js AI API.

Get Started
JavaScript

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

// Ask Gemma 3 12B a question and render the reply in the page
puter.ai.chat("Explain quantum computing in simple terms", {
    model: "google/gemma-3-12b-it"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
HTML

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "google/gemma-3-12b-it"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
Python

# pip install openai
from openai import OpenAI

# Point the OpenAI client at Puter's OpenAI-compatible endpoint
client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="google/gemma-3-12b-it",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
cURL

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "google/gemma-3-12b-it",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
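
For longer replies you may prefer to stream tokens as they arrive instead of waiting for the complete response. A minimal sketch, assuming the stream option Puter.js documents for puter.ai.chat (each streamed chunk exposes a text field); run it inside an async function or a module that supports top-level await.

// Stream the reply as it is generated instead of waiting for the full response
const response = await puter.ai.chat("Explain quantum computing in simple terms", {
    model: "google/gemma-3-12b-it",
    stream: true
});

for await (const part of response) {
    puter.print(part?.text ?? "");
}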

Model Card

Gemma 3 12B Instruct is Google's mid-sized open multimodal model supporting text and image input with a 128K token context window. It supports 140+ languages and offers strong performance for single-GPU deployment.
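
Because the model accepts image input, you can pass an image along with your prompt. A minimal sketch, assuming the documented puter.ai.chat(prompt, imageURL, testMode, options) form; the image URL below is only a placeholder.

// Ask Gemma 3 12B about an image (the URL below is a placeholder)
puter.ai.chat(
    "What do you see in this image?",
    "https://example.com/photo.jpg",
    false,  // testMode
    { model: "google/gemma-3-12b-it" }
).then(response => {
    puter.print(response.message.content);
});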

Context Window: 131K tokens
Max Output: 131K tokens
Input Cost: $0.04 per million tokens
Output Cost: $0.13 per million tokens
Release Date: Mar 12, 2025
Output Speed: 30 tokens / sec
Latency (time to first token): 23.49s
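
To put these numbers in context, a hypothetical 1,000-token reply at these rates would take roughly 23.5 seconds to reach the first token plus about 33 seconds of generation (1,000 tokens / 30 tokens per second), or just under a minute end to end.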

Model Playground

Try Gemma 3 12B instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.

Benchmarks

How Gemma 3 12B performs on standard evaluations.

Artificial Analysis Intelligence Index: 8.8 (better than 11% of tracked models)
Artificial Analysis Coding Index: 6.3 (better than 13% of tracked models)
Artificial Analysis Math Index: 18.3 (better than 20% of tracked models)

GPQA Diamond (graduate-level science Q&A): 34.9%
Humanity's Last Exam (cross-domain reasoning): 4.8%
LiveCodeBench (recent coding problems): 13.7%
SciCode (scientific programming): 17.4%
MATH-500 (competition math): 85.3%
AIME 2024 (advanced math exam): 22.0%
AIME 2025 (advanced math exam): 18.3%
IFBench (instruction following): 36.7%
LCR (long-context reasoning): 6.7%
Terminal-Bench Hard (agentic terminal tasks): 0.8%
τ²-Bench (tool use / agents): 10.8%

Scores sourced from Artificial Analysis.

Find other Google models

Chat

Gemma 4 26B A4B

Gemma 4 26B A4B is a Mixture-of-Experts (MoE) open model from Google DeepMind, built from the same research as Gemini 3. It has 26B total parameters but activates only 3.8B per forward pass, delivering near-31B-dense quality at a fraction of the compute cost. The model supports a 256K token context window, multimodal image and text input, built-in step-by-step reasoning (thinking mode), and native function calling for agentic workflows. It currently ranks #6 among open models on the Arena AI text leaderboard with an estimated LMArena score of 1441 — competitive with models many times its active size. It excels at reasoning, coding, long-context tasks, and structured tool use. It's a strong pick for developers who need high throughput and low latency without sacrificing capability.

Chat

Gemma 4 31B

Gemma 4 31B is a dense multimodal model from Google DeepMind, built on the same research foundation as Gemini 3. It is the most capable model in the Gemma 4 family, accepting text, image, and video input with a 256K-token context window. It delivers strong benchmark results: 89.2% on AIME 2026, 85.2% on MMLU Pro, 80.0% on LiveCodeBench v6, and 84.3% on GPQA Diamond. On the Arena AI text leaderboard, it ranks as the #3 open model globally, outperforming many models with far higher parameter counts. Gemma 4 31B features native function calling trained into the model, configurable chain-of-thought reasoning, and structured JSON output — making it especially well-suited for agentic workflows, coding tasks, and multi-turn tool use. It supports over 140 languages and serves as a strong foundation for fine-tuning.

Video

Veo 3.1 Lite

Veo 3.1 Lite is Google DeepMind's most cost-effective video generation model, built for high-volume applications where per-clip cost is a primary concern. It generates video at the same speed as Veo 3.1 Fast but at less than half the price — starting at $0.05 per second for 720p. The model supports text-to-video and image-to-video with 720p and 1080p output in landscape (16:9) or portrait (9:16), at configurable durations of 4, 6, or 8 seconds. It does not support 4K output, scene extension, or native audio generation — clips are silent by default. Veo 3.1 Lite is ideal for developers building batch video pipelines, social media automation, or interactive tools where cost per generation matters most and audio can be added in post-production.

Frequently Asked Questions

How do I use Gemma 3 12B?

You can access Gemma 3 12B by Google through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript, with no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is Gemma 3 12B free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Gemma 3 12B to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Gemma 3 12B?

Pricing for Gemma 3 12B is based on the number of input and output tokens used per request.

Price per 1M tokens:
Input: $0.04
Output: $0.13
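
As a rough illustration of how token pricing adds up, here is a hypothetical request with 10,000 input tokens and 2,000 output tokens (both sizes are made up for the example):

// Hypothetical request: 10,000 input tokens and 2,000 output tokens
const inputCost = (10_000 / 1_000_000) * 0.04;   // $0.00040
const outputCost = (2_000 / 1_000_000) * 0.13;   // $0.00026
console.log(`$${(inputCost + outputCost).toFixed(5)}`);  // $0.00066
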
Who created Gemma 3 12B?

Gemma 3 12B was created by Google and released on Mar 12, 2025.

What is the context window of Gemma 3 12B?

Gemma 3 12B supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text (at roughly 375 words per page).

What is the max output length of Gemma 3 12B?

Gemma 3 12B can generate up to 131K tokens in a single response.

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Gemma 3 12B API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
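
For example, here is a minimal React sketch; it assumes Puter.js is loaded globally via the script tag shown above, and the component and handler names are only illustrative.

// Minimal React sketch; assumes <script src="https://js.puter.com/v2/"></script>
// is already on the page, so `puter` is available as a global.
import { useState } from 'react';

export function AskGemma() {
    const [answer, setAnswer] = useState('');

    async function ask() {
        const response = await puter.ai.chat(
            "Explain quantum computing in simple terms",
            { model: "google/gemma-3-12b-it" }
        );
        setAnswer(response.message.content);
    }

    return (
        <div>
            <button onClick={ask}>Ask Gemma 3 12B</button>
            <p>{answer}</p>
        </div>
    );
}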

Get started with Puter.js

Add Gemma 3 12B to your app without worrying about API keys or setup.

Read the Docs | View Tutorials