NVIDIA Nemotron API
Access NVIDIA Nemotron instantly with Puter.js, and add AI to any app in a few lines of code — no backend and no API keys required.
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain AI like I'm five!", {
    model: "nvidia/llama-3.1-nemotron-ultra-253b-v1"
}).then(response => {
    console.log(response);
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain AI like I'm five!", {
            model: "nvidia/llama-3.1-nemotron-ultra-253b-v1"
        }).then(response => {
            console.log(response);
        });
    </script>
</body>
</html>
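For longer replies you can stream output as it is generated instead of waiting for the full response. A minimal sketch using the `stream: true` option of `puter.ai.chat`, which returns an async iterable of parts carrying a `text` field (this example assumes the Puter.js script tag above has already loaded):

```javascript
// Stream the reply chunk-by-chunk as the model generates it.
(async () => {
    const response = await puter.ai.chat(
        "Write a short poem about GPUs.",
        {
            model: "nvidia/llama-3.1-nemotron-ultra-253b-v1",
            stream: true
        }
    );
    for await (const part of response) {
        // Each part carries the next chunk of generated text.
        if (part?.text) document.body.append(part.text);
    }
})();
```

Streaming keeps the UI responsive for long generations; the non-streaming form above is simpler when you only need the final string.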
List of NVIDIA Nemotron Models
NVIDIA: Nemotron 3 Nano 30B A3B
nvidia/nemotron-3-nano-30b-a3b
Nemotron 3 Nano 30B A3B is a 31.6B total parameter (3.2B active) hybrid Mamba-Transformer MoE model trained from scratch by NVIDIA with a 1M token context window. It offers up to 3.3x higher throughput than comparable models and supports configurable reasoning traces for both agentic and conversational tasks.
NVIDIA: Nemotron Nano 12B V2 VL
nvidia/nemotron-nano-12b-v2-vl
Nemotron Nano 12B V2 VL is a 12.6B parameter multimodal vision-language model built on a hybrid Mamba-Transformer architecture for document intelligence and video understanding. It processes multiple images, documents, and videos while achieving leading results on OCRBench v2 with up to 2.5x higher throughput using Efficient Video Sampling.
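For image inputs, `puter.ai.chat` accepts an image URL as its second argument. A hedged sketch for document OCR with this model — the image URL is a hypothetical placeholder, and the example assumes the Puter.js script tag has already loaded:

```javascript
// Ask the vision-language model to read an image.
// "https://example.com/receipt.png" is a placeholder; use your own image URL.
puter.ai.chat(
    "Extract all text from this receipt.",
    "https://example.com/receipt.png",
    { model: "nvidia/nemotron-nano-12b-v2-vl" }
).then(response => {
    console.log(response);
});
```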
NVIDIA: Nemotron Nano 9B V2
nvidia/nemotron-nano-9b-v2
Nemotron Nano 9B V2 is a 9B parameter hybrid Mamba-Transformer model trained from scratch by NVIDIA with a 128K context window, achieving up to 6x higher inference throughput than similar models like Qwen3-8B. It features controllable reasoning budget allowing developers to balance accuracy and response time for edge deployment.
NVIDIA: Llama 3.3 Nemotron Super 49B V1.5
nvidia/llama-3.3-nemotron-super-49b-v1.5
Llama 3.3 Nemotron Super 49B v1.5 is an upgraded 49B parameter reasoning model derived from Llama 3.3 70B Instruct, optimized for single-GPU deployment on H100/H200 through Neural Architecture Search. It supports 128K context and is post-trained for agentic workflows including RAG, tool calling, and multi-turn conversations.
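Tool calling with this model can be sketched with OpenAI-style function definitions passed through the `tools` option of `puter.ai.chat`. The `get_weather` tool below is hypothetical — you would supply your own function schema and handle any tool calls the model returns:

```javascript
// Offer the model a (hypothetical) weather-lookup tool it may choose to call.
puter.ai.chat("What's the weather in Paris?", {
    model: "nvidia/llama-3.3-nemotron-super-49b-v1.5",
    tools: [{
        type: "function",
        function: {
            name: "get_weather", // hypothetical tool name
            description: "Get the current weather for a city",
            parameters: {
                type: "object",
                properties: {
                    city: { type: "string", description: "City name" }
                },
                required: ["city"]
            }
        }
    }]
}).then(response => {
    // If the model requested a tool call, run it and send the result back
    // in a follow-up chat; otherwise the response is a normal answer.
    console.log(response);
});
```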
NVIDIA: Llama 3.1 Nemotron Ultra 253B v1
nvidia/llama-3.1-nemotron-ultra-253b-v1
Llama 3.1 Nemotron Ultra 253B is a 253B parameter reasoning model derived from Llama 3.1 405B using Neural Architecture Search for improved efficiency, supporting 128K context and toggle ON/OFF reasoning modes. It excels at complex math, scientific reasoning, coding, RAG, and tool calling tasks while fitting on a single 8xH100 node.
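The reasoning toggle is driven by the system prompt. A hedged sketch using the messages-array form of `puter.ai.chat` and NVIDIA's documented "detailed thinking on" / "detailed thinking off" system-prompt convention for this model family:

```javascript
// Enable the model's reasoning mode via the system prompt,
// then ask a question that benefits from step-by-step thinking.
puter.ai.chat([
    { role: "system", content: "detailed thinking on" },
    { role: "user", content: "What is the integral of x^2 from 0 to 3?" }
], {
    model: "nvidia/llama-3.1-nemotron-ultra-253b-v1"
}).then(response => {
    console.log(response);
});
```

Switching the system prompt to "detailed thinking off" trades reasoning depth for faster, shorter answers.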
NVIDIA: Llama 3.1 Nemotron 70B Instruct
nvidia/llama-3.1-nemotron-70b-instruct
Llama 3.1 Nemotron 70B Instruct is a 70B parameter LLM customized by NVIDIA using RLHF to improve response helpfulness, achieving top rankings on alignment benchmarks like Arena Hard and AlpacaEval 2 LC. It supports a 128K token context and is optimized for conversational AI and instruction-following tasks.
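For multi-turn conversation, pass the full message history as an array instead of a single prompt string. A minimal sketch (the conversation content is illustrative):

```javascript
// Continue a conversation by replaying prior turns along with the new question.
puter.ai.chat([
    { role: "system", content: "You are a concise, helpful assistant." },
    { role: "user", content: "Recommend a sci-fi novel." },
    { role: "assistant", content: "Try \"A Fire Upon the Deep\" by Vernor Vinge." },
    { role: "user", content: "What makes it good?" }
], {
    model: "nvidia/llama-3.1-nemotron-70b-instruct"
}).then(response => {
    console.log(response);
});
```

Your app is responsible for appending each new user and assistant turn to the array before the next call.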
Frequently Asked Questions
The NVIDIA Nemotron API gives you access to NVIDIA's Nemotron family of chat, reasoning, and vision-language models. Through Puter.js, you can start using NVIDIA Nemotron models instantly with zero setup or configuration.
Puter.js supports a variety of NVIDIA Nemotron models, including NVIDIA: Nemotron 3 Nano 30B A3B, NVIDIA: Nemotron Nano 12B V2 VL, NVIDIA: Nemotron Nano 9B V2, and more. Find all AI models supported by Puter.js in the AI model list.
With the User-Pays model, users cover their own AI costs through their Puter account. This means you can build apps without worrying about infrastructure expenses.
Puter.js is a JavaScript library that provides access to AI, storage, and other cloud services directly from a single API. It handles authentication, infrastructure, and scaling so you can focus on building your app.
Yes — the NVIDIA Nemotron API through Puter.js works with any JavaScript framework, Node.js, or plain HTML. Just include the library and start building. See the documentation for more details.