Meta: Llama 3.2 11B Vision Instruct API

Access Meta: Llama 3.2 11B Vision Instruct from Meta Llama using the Puter.js AI API.

Get Started

Model Card

Llama 3.2 11B Vision Instruct is Meta's multimodal model that processes both text and images with 11 billion parameters. It excels at visual recognition, image reasoning, captioning, and answering questions about images.

Context Window: N/A
Max Output: 16K tokens
Input Cost: $0.05 per million tokens
Output Cost: $0.05 per million tokens
Release Date: Sep 25, 2024

API Usage Example

Add Meta: Llama 3.2 11B Vision Instruct to your app with just a few lines of code.
No API keys, no backend, no configuration required.

Using npm:

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "meta-llama/llama-3.2-11b-vision-instruct"
}).then(response => {
    document.body.innerHTML = response.message.content;
});

Or directly in the browser, using the hosted script:
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "meta-llama/llama-3.2-11b-vision-instruct"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
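The examples above send a text-only prompt, but since this is a vision model you will usually want to pass an image alongside the question. The sketch below assumes the `puter.ai.chat(prompt, imageURL, testMode, options)` overload of the Puter.js chat API; the helper name and image URL are illustrative, so verify the exact signature against the Puter.js documentation.

```javascript
// Sketch: ask the vision model a question about an image.
// Assumes the puter.ai.chat(prompt, imageURL, testMode, options) overload;
// check the Puter.js docs for the exact signature.
const MODEL = "meta-llama/llama-3.2-11b-vision-instruct";

function describeImage(imageUrl, question = "What do you see in this image?") {
    return puter.ai.chat(question, imageUrl, false, { model: MODEL })
        .then(response => response.message.content);
}

// Usage (in a page that loads https://js.puter.com/v2/):
// describeImage("https://example.com/photo.jpg").then(text => console.log(text));
```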

View full documentation →

Frequently Asked Questions

What is this Meta: Llama 3.2 11B Vision Instruct API about?

The Meta: Llama 3.2 11B Vision Instruct API gives you access to Meta Llama's multimodal chat model through Puter.js. With just a few lines of JavaScript, you can integrate Meta: Llama 3.2 11B Vision Instruct into any web app or Node.js project — no API keys, no backend, and no configuration required.

Who created Meta: Llama 3.2 11B Vision Instruct?

Meta: Llama 3.2 11B Vision Instruct was created by Meta Llama and released on Sep 25, 2024.

What is the max output length of Meta: Llama 3.2 11B Vision Instruct?

Meta: Llama 3.2 11B Vision Instruct can generate up to 16K tokens in a single response.

How much does it cost?

The Meta: Llama 3.2 11B Vision Instruct API is available through the User-Pays Model. As a developer, you can add the Meta: Llama 3.2 11B Vision Instruct API to your app for free — your users pay for their own AI costs directly.

Price per 1M tokens:
Input: $0.05
Output: $0.05
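Because input and output are billed at the same flat rate, estimating a request's cost is simple multiplication. A minimal sketch (the function name and the token counts in the example are illustrative):

```javascript
// Estimate the USD cost of one request at $0.05 per million tokens,
// the same rate for input and output.
const PRICE_PER_MILLION_USD = 0.05;

function estimateCostUSD(inputTokens, outputTokens) {
    return ((inputTokens + outputTokens) / 1_000_000) * PRICE_PER_MILLION_USD;
}

// Example: 2,000 input tokens + 500 output tokens
// (2,500 / 1,000,000) * $0.05 = $0.000125
```

Under the User-Pays Model these costs fall to your users rather than to you, but the arithmetic is useful for setting expectations about per-request spend.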
How do I access the Meta: Llama 3.2 11B Vision Instruct API?

You can access the Meta: Llama 3.2 11B Vision Instruct API with just a few lines of JavaScript — no API keys, no backend, and no configuration required. Include the Puter.js library in your project and start making calls right away. For more details, check out our documentation.

Does the Meta: Llama 3.2 11B Vision Instruct API work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Meta: Llama 3.2 11B Vision Instruct API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add Meta: Llama 3.2 11B Vision Instruct to your app without worrying about API keys or setup.

Read the Docs View Tutorials