Llama 3.1 Nemotron 70B Instruct API

Access Llama 3.1 Nemotron 70B Instruct from NVIDIA using Puter.js AI API.

Get Started

Model Card

Llama 3.1 Nemotron 70B Instruct is a 70B parameter LLM customized by NVIDIA using RLHF to improve response helpfulness, achieving top rankings on alignment benchmarks like Arena Hard and AlpacaEval 2 LC. It supports a 128K token context and is optimized for conversational AI and instruction-following tasks.

Context Window 128K

tokens

Max Output 16K

tokens

Input Cost $1.20

per million tokens

Output Cost $1.20

per million tokens

Release Date Oct 1, 2024


API Usage Example

Add Llama 3.1 Nemotron 70B Instruct to your app with just a few lines of code.
No API keys, no backend, no configuration required.

// Node.js / bundler usage:
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "nvidia/llama-3.1-nemotron-70b-instruct"
}).then(response => {
    console.log(response.message.content);
});
Or load Puter.js directly in the browser, with no build step:

<!DOCTYPE html>
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "nvidia/llama-3.1-nemotron-70b-instruct"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
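For longer responses, you can render tokens as they arrive instead of waiting for the full reply. The sketch below assumes Puter.js's `stream: true` option, where the call resolves to an async iterable whose parts carry the newly generated text in a `text` field; treat the exact shape as an assumption and check the documentation for your Puter.js version.

```javascript
// Streaming sketch - assumes puter.ai.chat with { stream: true }
// yields parts whose new text lives in part.text.
async function streamAnswer() {
    const stream = await puter.ai.chat(
        "Explain quantum computing in simple terms",
        {
            model: "nvidia/llama-3.1-nemotron-70b-instruct",
            stream: true
        }
    );

    // Append each chunk to the page as soon as it arrives.
    for await (const part of stream) {
        if (part?.text) {
            document.body.innerText += part.text;
        }
    }
}

streamAnswer();
```

This keeps the page responsive during long generations; in a framework like React you would accumulate the chunks in state rather than writing to `document.body` directly.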

View full documentation →

Frequently Asked Questions

What is this Llama 3.1 Nemotron 70B Instruct API about?

The Llama 3.1 Nemotron 70B Instruct API gives you access to NVIDIA's chat model through Puter.js. With just a few lines of JavaScript, you can integrate Llama 3.1 Nemotron 70B Instruct into any web app or Node.js project — no API keys, no backend, and no configuration required.

Who created Llama 3.1 Nemotron 70B Instruct?

Llama 3.1 Nemotron 70B Instruct was created by NVIDIA and released on Oct 1, 2024.

What is the max output length of Llama 3.1 Nemotron 70B Instruct?

Llama 3.1 Nemotron 70B Instruct can generate up to 16K tokens in a single response.

How much does it cost?

The Llama 3.1 Nemotron 70B Instruct API is available through the User-Pays Model. As a developer, you can add the Llama 3.1 Nemotron 70B Instruct API to your app for free — your users pay for their own AI costs directly.

Price per 1M tokens:
Input: $1.20
Output: $1.20
How do I access the Llama 3.1 Nemotron 70B Instruct API?

You can access the Llama 3.1 Nemotron 70B Instruct API with just a few lines of JavaScript — no API keys, no backend, and no configuration required. Include the Puter.js library in your project and start making calls right away. For more details, check out our documentation.

Does the Llama 3.1 Nemotron 70B Instruct API work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Llama 3.1 Nemotron 70B Instruct API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add Llama 3.1 Nemotron 70B Instruct to your app without worrying about API keys or setup.

Read the Docs View Tutorials