Meta Llama: Llama 3.2 3B Instruct
Access Llama 3.2 3B Instruct from Meta Llama using the Puter.js AI API.
Get Started

```js
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "meta-llama/llama-3.2-3b-instruct"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
```
```html
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "meta-llama/llama-3.2-3b-instruct"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
```
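For longer replies you can render tokens as they arrive instead of waiting for the full message. A minimal streaming sketch, assuming the `stream: true` option of `puter.ai.chat`, which makes the call resolve to an async iterable whose parts carry text chunks:

```js
// Stream the reply chunk-by-chunk instead of waiting for the full message.
// Assumes the stream option resolves to an async iterable of parts.
(async () => {
    const response = await puter.ai.chat(
        "Explain quantum computing in simple terms",
        { model: "meta-llama/llama-3.2-3b-instruct", stream: true }
    );
    for await (const part of response) {
        if (part?.text) document.body.innerHTML += part.text;
    }
})();
```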
```python
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-3b-instruct",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
```
```bash
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "meta-llama/llama-3.2-3b-instruct",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
```
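The curl request above can be reproduced from JavaScript with plain `fetch` (built into browsers and Node 18+). A minimal sketch, using the same endpoint, payload, and token placeholder as the other examples:

```js
// Same request as the curl example, sent with fetch.
(async () => {
    const res = await fetch("https://api.puter.com/puterai/openai/v1/chat/completions", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_PUTER_AUTH_TOKEN"
        },
        body: JSON.stringify({
            model: "meta-llama/llama-3.2-3b-instruct",
            messages: [
                { role: "user", content: "Explain quantum computing in simple terms" }
            ]
        })
    });
    const data = await res.json();
    console.log(data.choices[0].message.content); // OpenAI-style response shape
})();
```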
Model Card
Llama 3.2 3B Instruct is a compact 3-billion-parameter model optimized for on-device use cases, with 128K-token context support. It outperforms comparably sized models on instruction following, summarization, and tool-use tasks.
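Since tool use is one of the model's headline strengths, a quick sketch may help. This assumes `puter.ai.chat` accepts an OpenAI-style `tools` array and returns an OpenAI-style message; the `get_weather` function here is a made-up example:

```js
// Hypothetical tool definition in the OpenAI function-calling format.
const tools = [{
    type: "function",
    function: {
        name: "get_weather", // made-up example function
        description: "Get the current weather for a city",
        parameters: {
            type: "object",
            properties: {
                city: { type: "string", description: "City name" }
            },
            required: ["city"]
        }
    }
}];

puter.ai.chat("What's the weather in Paris?", {
    model: "meta-llama/llama-3.2-3b-instruct",
    tools
}).then(response => {
    // If the model chose to call the tool, the call appears on the message.
    console.log(response.message.tool_calls ?? response.message.content);
});
```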
| Spec | Value |
|---|---|
| Context Window | 80K tokens |
| Max Output | 16K tokens |
| Input Cost | $0.05 per million tokens |
| Output Cost | $0.34 per million tokens |
| Release Date | Sep 25, 2024 |
| Output Speed | 53 tokens/sec |
| Latency (time to first token) | 0.58s |
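To make the per-token prices concrete, here is a small sketch that estimates the cost of one request from its token counts. The rates are the ones listed above; the token numbers are made up for the example:

```js
// Rates from the model card, in dollars per million tokens.
const INPUT_RATE = 0.05;
const OUTPUT_RATE = 0.34;

// Hypothetical request: 2,000 prompt tokens in, 500 completion tokens out.
const cost = (2_000 / 1e6) * INPUT_RATE + (500 / 1e6) * OUTPUT_RATE;
console.log(`Estimated cost: $${cost.toFixed(6)}`); // $0.000270
```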
Model Playground
Try Llama 3.2 3B Instruct instantly in your browser.
This playground uses the Puter.js AI API, so no API keys or setup are required.
Benchmarks
How Llama 3.2 3B Instruct performs on standard evaluations.
| Benchmark | Focus | Score |
|---|---|---|
| GPQA Diamond | Graduate-level science Q&A | 25.5% |
| Humanity's Last Exam | Cross-domain reasoning | 5.2% |
| LiveCodeBench | Recent coding problems | 8.3% |
| SciCode | Scientific programming | 5.2% |
| MATH-500 | Competition math | 48.9% |
| AIME 2024 | Advanced math exam | 6.7% |
| AIME 2025 | Advanced math exam | 3.3% |
| IFBench | Instruction following | 26.2% |
| LCR | Long-context reasoning | 2.0% |
| τ²-Bench | Tool use / agents | 21.1% |
Scores sourced from Artificial Analysis.
Find other Meta Llama models →
Llama Guard 4 12B
Llama Guard 4 12B is Meta's 12 billion parameter multimodal safety model that moderates both text and image inputs across 12 languages. It was built from Llama 4 Scout and detects violations based on the MLCommons hazard taxonomy.
Llama 4 Maverick
Llama 4 Maverick is Meta's 400 billion total parameter MoE model with 17B active parameters and 128 experts, supporting 1M token context. It's natively multimodal with state-of-the-art performance on coding, reasoning, and image understanding tasks.
Llama 4 Scout
Llama 4 Scout is Meta's efficient 109 billion parameter MoE model with 17B active parameters and 16 experts, featuring an industry-leading 10M token context window. It fits on a single H100 GPU and handles multimodal text and image inputs.
Frequently Asked Questions
How do I access Llama 3.2 3B Instruct?
You can access Llama 3.2 3B Instruct by Meta Llama through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript; no backend and no configuration are required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
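For multi-turn conversations, `puter.ai.chat` also accepts an array of role-tagged messages in place of a single prompt string. A minimal sketch, assuming the same message format as the OpenAI-compatible examples above:

```js
// Carry prior turns as role-tagged messages instead of a single prompt.
puter.ai.chat([
    { role: "system", content: "You are a concise tutor." },
    { role: "user", content: "Explain quantum computing in simple terms" },
    { role: "assistant", content: "Quantum computers use qubits, which..." },
    { role: "user", content: "Now summarize that in one sentence." }
], {
    model: "meta-llama/llama-3.2-3b-instruct"
}).then(response => {
    console.log(response.message.content);
});
```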
Is Llama 3.2 3B Instruct free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Llama 3.2 3B Instruct to your app at no cost: your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Llama 3.2 3B Instruct cost?

| | Price per 1M tokens |
|---|---|
| Input | $0.05 |
| Output | $0.34 |
Who created Llama 3.2 3B Instruct, and when was it released?
Llama 3.2 3B Instruct was created by Meta Llama and released on Sep 25, 2024.
What is the context window of Llama 3.2 3B Instruct?
Llama 3.2 3B Instruct supports a context window of 80K tokens. For reference, at roughly 500 tokens per page, that is equivalent to about 160 pages of text.
What is the maximum output length?
Llama 3.2 3B Instruct can generate up to 16K tokens in a single response.
Can I use Llama 3.2 3B Instruct with my JavaScript framework?
Yes. The Llama 3.2 3B Instruct API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Llama 3.2 3B Instruct to your app without worrying about API keys or setup.
Read the Docs · View Tutorials