// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "google/gemma-4-26b-a4b-it"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "google/gemma-4-26b-a4b-it"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="google/gemma-4-26b-a4b-it",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "google/gemma-4-26b-a4b-it",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
Model Card
Gemma 4 26B A4B is a Mixture-of-Experts (MoE) open model from Google DeepMind, built from the same research as Gemini 3. It has 26B total parameters but activates only 3.8B per forward pass, delivering near-31B-dense quality at a fraction of the compute cost.
The model supports a 256K token context window, multimodal image and text input, built-in step-by-step reasoning (thinking mode), and native function calling for agentic workflows. It currently ranks #6 among open models on the Arena AI text leaderboard with an estimated LMArena score of 1441 — competitive with models many times its active size.
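To illustrate the image input, here is a minimal Puter.js sketch. It assumes the four-argument puter.ai.chat(prompt, imageURL, testMode, options) form shown in the Puter.js vision examples, and the image URL and response shape are placeholders matching the text example above.

// Minimal sketch: send an image URL alongside the prompt.
// Assumes the puter.ai.chat(prompt, imageURL, testMode, options) form;
// the image URL below is a placeholder, replace it with your own.
puter.ai.chat(
    "Describe what you see in this image",
    "https://example.com/photo.jpg",       // placeholder image URL
    false,                                 // testMode off
    { model: "google/gemma-4-26b-a4b-it" }
).then(response => {
    // Assumes the same response shape as the text example above.
    document.body.innerHTML = response.message.content;
});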
It excels at reasoning, coding, long-context tasks, and structured tool use. It's a strong pick for developers who need high throughput and low latency without sacrificing capability.
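For structured tool use, the sketch below assumes Puter.js accepts OpenAI-style tool definitions via a tools option and surfaces any requested calls on response.message.tool_calls. The get_weather function is hypothetical and included only for illustration.

// Hedged sketch of tool use with an OpenAI-style tool definition.
// "get_weather" is a hypothetical function used only for illustration.
const tools = [{
    type: "function",
    function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
            type: "object",
            properties: {
                city: { type: "string", description: "City name, e.g. Paris" }
            },
            required: ["city"]
        }
    }
}];

puter.ai.chat("What's the weather in Paris?", {
    model: "google/gemma-4-26b-a4b-it",
    tools
}).then(response => {
    const call = response.message.tool_calls?.[0];
    if (call) {
        // Run your own get_weather with call.function.arguments here,
        // then send the result back in a follow-up chat call.
        console.log(call.function.name, call.function.arguments);
    }
});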
| Spec | Value |
|---|---|
| Context Window | 262K tokens |
| Max Output | 262K tokens |
| Input Cost | $0.08 per million tokens |
| Output Cost | $0.35 per million tokens |
| Release Date | Apr 3, 2026 |
Model Playground
Try Gemma 4 26B A4B instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
More AI Models From Google
Gemma 4 31B
Gemma 4 31B is a dense multimodal model from Google DeepMind, built on the same research foundation as Gemini 3. It is the most capable model in the Gemma 4 family, accepting text, image, and video input with a 256K-token context window. It delivers strong benchmark results: 89.2% on AIME 2026, 85.2% on MMLU Pro, 80.0% on LiveCodeBench v6, and 84.3% on GPQA Diamond. On the Arena AI text leaderboard, it ranks as the #3 open model globally, outperforming many models with far higher parameter counts. Gemma 4 31B features native function calling trained into the model, configurable chain-of-thought reasoning, and structured JSON output — making it especially well-suited for agentic workflows, coding tasks, and multi-turn tool use. It supports over 140 languages and serves as a strong foundation for fine-tuning.
Veo 3.1 Lite
Veo 3.1 Lite is Google DeepMind's most cost-effective video generation model, built for high-volume applications where per-clip cost is a primary concern. It generates video at the same speed as Veo 3.1 Fast but at less than half the price — starting at $0.05 per second for 720p. The model supports text-to-video and image-to-video with 720p and 1080p output in landscape (16:9) or portrait (9:16), at configurable durations of 4, 6, or 8 seconds. It does not support 4K output, scene extension, or native audio generation — clips are silent by default. Veo 3.1 Lite is ideal for developers building batch video pipelines, social media automation, or interactive tools where cost per generation matters most and audio can be added in post-production.
Gemini 3.1 Flash Lite Preview
Gemini 3.1 Flash Lite is Google's fastest and most cost-efficient model in the Gemini 3 series, optimized for high-volume, latency-sensitive tasks like translation, classification, and content moderation. Priced at $0.25/1M input tokens and $1.50/1M output tokens, it outperforms Gemini 2.5 Flash with 2.5x faster time-to-first-token and a 45% boost in output speed.
Frequently Asked Questions
You can access Gemma 4 26B A4B by Google through Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
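If you want the reply to appear as it is generated rather than all at once, Puter.js also supports streaming. The snippet below is a minimal sketch assuming the stream: true option, which yields chunks that expose a text field.

// Minimal streaming sketch, assuming { stream: true } yields
// an async-iterable of chunks with a .text field.
(async () => {
    const reply = await puter.ai.chat(
        "Explain quantum computing in simple terms",
        { model: "google/gemma-4-26b-a4b-it", stream: true }
    );

    for await (const part of reply) {
        if (part?.text) {
            document.body.innerHTML += part.text;
        }
    }
})();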
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Gemma 4 26B A4B to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
| Token type | Price per 1M tokens |
|---|---|
| Input | $0.08 |
| Output | $0.35 |
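As a rough back-of-the-envelope example using the prices above (the token counts are hypothetical), a request with 2,000 input tokens and 500 output tokens would cost about $0.0003:

// Cost estimate from the per-million-token prices above.
// The token counts are hypothetical example values.
const INPUT_PRICE = 0.08 / 1_000_000;   // $ per input token
const OUTPUT_PRICE = 0.35 / 1_000_000;  // $ per output token

const inputTokens = 2000;   // example prompt size
const outputTokens = 500;   // example reply size

const cost = inputTokens * INPUT_PRICE + outputTokens * OUTPUT_PRICE;
console.log(`Estimated cost: $${cost.toFixed(6)}`);  // ~$0.000335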
Gemma 4 26B A4B was created by Google and released on Apr 3, 2026.
Gemma 4 26B A4B supports a context window of 262K tokens. For reference, at roughly 500 tokens per page, that is equivalent to about 524 pages of text.
Gemma 4 26B A4B can generate up to 262K tokens in a single response.
Yes — the Gemma 4 26B A4B API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Gemma 4 26B A4B to your app without worrying about API keys or setup.
Read the Docs | View Tutorials