Google: Gemma 3n 2B

Access Gemma 3n 2B from Google using the Puter.js AI API.

Model Card

Gemma 3n E2B Instruct (Free) is Google's mobile-first open model; Per-Layer Embeddings give it an effective memory footprint of just 2B parameters. It's optimized for on-device AI with audio, text, image, and video understanding.

Context Window: 8K tokens
Max Output: 2K tokens
Input Cost: $0 per million tokens
Output Cost: $0 per million tokens
Release Date: Jun 25, 2025

API Usage Example

Add Gemma 3n 2B to your app with just a few lines of code.
No backend, no configuration required.

JavaScript (npm):

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "google/gemma-3n-e2b-it:free"
}).then(response => {
    document.body.innerHTML = response.message.content;
});

HTML:

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "google/gemma-3n-e2b-it:free"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>

Python:

# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="google/gemma-3n-e2b-it:free",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)

cURL:

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "google/gemma-3n-e2b-it:free",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
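
For longer replies, the JavaScript API can also stream output as it is generated instead of returning it all at once. A minimal sketch, assuming the stream option and the async-iterable response shape shown in Puter's chat documentation:

const response = await puter.ai.chat("Explain quantum computing in simple terms", {
    model: "google/gemma-3n-e2b-it:free",
    stream: true
});

// Each part carries an incremental chunk of the reply
for await (const part of response) {
    if (part?.text) document.body.innerHTML += part.text;
}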

View full documentation →

More AI Models From Google

Gemma 4 26B A4B

Gemma 4 26B A4B is a Mixture-of-Experts (MoE) open model from Google DeepMind, built from the same research as Gemini 3. It has 26B total parameters but activates only 3.8B per forward pass, delivering near-31B-dense quality at a fraction of the compute cost. The model supports a 256K token context window, multimodal image and text input, built-in step-by-step reasoning (thinking mode), and native function calling for agentic workflows. It currently ranks #6 among open models on the Arena AI text leaderboard with an estimated LMArena score of 1441 — competitive with models many times its active size. It excels at reasoning, coding, long-context tasks, and structured tool use. It's a strong pick for developers who need high throughput and low latency without sacrificing capability.
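
Because the model is trained for native function calling, it can be driven through the same Puter.js API shown above. A hedged sketch using an OpenAI-style tools array; the model ID and the get_weather function are hypothetical placeholders, not confirmed identifiers:

const tools = [{
    type: "function",
    function: {
        name: "get_weather", // hypothetical example function
        description: "Get the current weather for a city",
        parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"]
        }
    }
}];

const response = await puter.ai.chat("What's the weather in Paris?", {
    model: "google/gemma-4-26b-a4b", // placeholder ID; check the models list
    tools
});

// If the model chose to call the function, the call appears here
console.log(response.message.tool_calls);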

Gemma 4 31B

Gemma 4 31B is a dense multimodal model from Google DeepMind, built on the same research foundation as Gemini 3. It is the most capable model in the Gemma 4 family, accepting text, image, and video input with a 256K-token context window. It delivers strong benchmark results: 89.2% on AIME 2026, 85.2% on MMLU Pro, 80.0% on LiveCodeBench v6, and 84.3% on GPQA Diamond. On the Arena AI text leaderboard, it ranks as the #3 open model globally, outperforming many models with far higher parameter counts. Gemma 4 31B features native function calling trained into the model, configurable chain-of-thought reasoning, and structured JSON output — making it especially well-suited for agentic workflows, coding tasks, and multi-turn tool use. It supports over 140 languages and serves as a strong foundation for fine-tuning.

Gemini 3.1 Flash Lite Preview

Gemini 3.1 Flash Lite is Google's fastest and most cost-efficient model in the Gemini 3 series, optimized for high-volume, latency-sensitive tasks like translation, classification, and content moderation. Priced at $0.25/1M input tokens and $1.50/1M output tokens, it outperforms Gemini 2.5 Flash with 2.5x faster time-to-first-token and a 45% boost in output speed.

View all Google models →

Frequently Asked Questions

How do I use Gemma 3n 2B?

You can access Gemma 3n 2B by Google through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
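
For multi-turn conversations, you can pass an array of messages instead of a single prompt. A minimal sketch, assuming the message-array form of puter.ai.chat described in Puter's docs (the system prompt is illustrative):

const response = await puter.ai.chat([
    { role: "system", content: "You are a concise tutor." },
    { role: "user", content: "Explain quantum computing in simple terms" }
], { model: "google/gemma-3n-e2b-it:free" });

console.log(response.message.content);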

Is Gemma 3n 2B free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Gemma 3n 2B to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Gemma 3n 2B?

Pricing for Gemma 3n 2B is based on the number of input and output tokens used per request.

Price per 1M tokens:
Input: $0
Output: $0

Who created Gemma 3n 2B?

Gemma 3n 2B was created by Google and released on Jun 25, 2025.

What is the context window of Gemma 3n 2B?

Gemma 3n 2B supports a context window of 8K tokens. For reference, that is roughly 6,000 words of English text, or about 16 pages.

What is the max output length of Gemma 3n 2B?

Gemma 3n 2B can generate up to 2K tokens in a single response.

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Gemma 3n 2B API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
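
For example, a minimal React sketch; the component name and UI are illustrative, and the puter global comes from the script tag shown above:

// Assumes <script src="https://js.puter.com/v2/"></script> is loaded in index.html
import { useState } from "react";

function AskGemma() {
    const [answer, setAnswer] = useState("");

    const ask = async () => {
        const response = await puter.ai.chat(
            "Explain quantum computing in simple terms",
            { model: "google/gemma-3n-e2b-it:free" }
        );
        setAnswer(response.message.content);
    };

    return (
        <div>
            <button onClick={ask}>Ask</button>
            <p>{answer}</p>
        </div>
    );
}

export default AskGemma;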

Get started with Puter.js

Add Gemma 3n 2B to your app without worrying about API keys or setup.

Read the Docs
View Tutorials