NVIDIA: Llama 3.1 Nemotron Ultra 253B v1
This model is no longer available.

Model Card
Llama 3.1 Nemotron Ultra 253B is a 253B-parameter reasoning model derived from Llama 3.1 405B using Neural Architecture Search for improved efficiency. It supports a 128K context window and a reasoning mode that can be toggled on or off, excels at complex math, scientific reasoning, coding, RAG, and tool-calling tasks, and fits on a single 8xH100 node.
| Spec | Value |
|---|---|
| Context Window | 131K tokens |
| Max Output | N/A |
| Input Cost | $0.60 per million tokens |
| Output Cost | $1.80 per million tokens |
| Release Date | Apr 7, 2025 |
Code Example
Add AI to your app with the Puter.js AI API — no API keys or setup required.
In Node.js or a bundled web app:

```javascript
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms").then(response => {
  document.body.innerHTML = response.message.content;
});
```

Or in plain HTML, via the hosted script:

```html
<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    puter.ai.chat("Explain quantum computing in simple terms").then(response => {
      document.body.innerHTML = response.message.content;
    });
  </script>
</body>
</html>
```
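To target this model specifically, `puter.ai.chat` accepts an options object with a `model` field. A minimal sketch, assuming the model identifier `nvidia/llama-3.1-nemotron-ultra-253b-v1` — check Puter's model list for the exact string:

```javascript
// Select a specific model via the options argument of puter.ai.chat.
// The model identifier below is an assumption, not a confirmed value.
puter.ai.chat(
  "Prove that the sum of two odd numbers is even.",
  { model: "nvidia/llama-3.1-nemotron-ultra-253b-v1" }
).then(response => {
  console.log(response.message.content);
});
```

The same options object works in both the npm and hosted-script variants shown above.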
More AI Models From NVIDIA
Nemotron 3 Super
Nemotron 3 Super is NVIDIA's open-weight 120B-parameter hybrid Mamba-Transformer MoE model with only 12B active parameters, designed for running complex multi-agent agentic AI systems at scale. It features a 1-million-token context window to prevent goal drift across long tasks and delivers up to 5x higher throughput than its predecessor. The model excels at reasoning, coding, and tool use.
Nemotron 3 Nano 30B A3B
Nemotron 3 Nano 30B A3B is a 31.6B total parameter (3.2B active) hybrid Mamba-Transformer MoE model trained from scratch by NVIDIA with a 1M token context window. It offers up to 3.3x higher throughput than comparable models and supports configurable reasoning traces for both agentic and conversational tasks.
Nemotron Nano 12B V2 VL
Nemotron Nano 12B V2 VL is a 12.6B parameter multimodal vision-language model built on a hybrid Mamba-Transformer architecture for document intelligence and video understanding. It processes multiple images, documents, and videos while achieving leading results on OCRBench v2 with up to 2.5x higher throughput using Efficient Video Sampling.
Frequently Asked Questions
How do I access Llama 3.1 Nemotron Ultra 253B v1?
You can access Llama 3.1 Nemotron Ultra 253B v1 by NVIDIA through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
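The OpenAI-compatible route mentioned above can be sketched with the official `openai` npm client. The base URL, auth variable, and model identifier here are all assumptions — consult Puter's API documentation for the actual values:

```javascript
// npm install openai
import OpenAI from 'openai';

// Hypothetical endpoint and credentials — not confirmed values.
const client = new OpenAI({
  baseURL: 'https://api.puter.com/v1',  // assumed base URL
  apiKey: process.env.PUTER_API_KEY,    // assumed auth token
});

const completion = await client.chat.completions.create({
  model: 'nvidia/llama-3.1-nemotron-ultra-253b-v1',  // assumed model id
  messages: [{ role: 'user', content: 'Summarize RAG in one sentence.' }],
});

console.log(completion.choices[0].message.content);
```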
Is Llama 3.1 Nemotron Ultra 253B v1 free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Llama 3.1 Nemotron Ultra 253B v1 to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Llama 3.1 Nemotron Ultra 253B v1 cost?
| | Price per 1M tokens |
|---|---|
| Input | $0.60 |
| Output | $1.80 |
Who created Llama 3.1 Nemotron Ultra 253B v1, and when was it released?
Llama 3.1 Nemotron Ultra 253B v1 was created by NVIDIA and released on Apr 7, 2025.
How large is the context window of Llama 3.1 Nemotron Ultra 253B v1?
Llama 3.1 Nemotron Ultra 253B v1 supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text.
Does the Llama 3.1 Nemotron Ultra 253B v1 API work with JavaScript frameworks?
Yes: the Llama 3.1 Nemotron Ultra 253B v1 API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add AI to your application without worrying about API keys or setup.