DeepSeek API

Access DeepSeek instantly with Puter.js and add AI to any app in a few lines of code, with no backend and no API keys.

In Node.js or a bundler project, install the package from npm:

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain AI like I'm five!", {
    model: "deepseek/deepseek-v3.2"
}).then(response => {
    console.log(response);
});
Or load Puter.js directly in the browser with a script tag, no install required:

<!DOCTYPE html>
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain AI like I'm five!", {
            model: "deepseek/deepseek-v3.2"
        }).then(response => {
            console.log(response);
        });
    </script>
</body>
</html>

List of DeepSeek Models

Chat

DeepSeek: DeepSeek V3.2

deepseek/deepseek-v3.2

DeepSeek V3.2 is the December 2025 flagship model featuring DeepSeek Sparse Attention for efficiency and massive reinforcement learning post-training, achieving GPT-5-level performance. It's the first DeepSeek model to integrate thinking directly into tool-use and excels at agentic AI tasks.

Chat

DeepSeek: DeepSeek V3.2 Speciale

deepseek/deepseek-v3.2-speciale

DeepSeek V3.2-Speciale is a high-compute variant designed exclusively for maximum reasoning accuracy, achieving gold-medal performance in IMO 2025, IOI 2025, and ICPC World Finals. It rivals Gemini 3.0 Pro but requires higher token usage and doesn't support tool calling.

Chat

DeepSeek: DeepSeek V3.2 Exp

deepseek/deepseek-v3.2-exp

DeepSeek V3.2-Exp is the September 2025 experimental predecessor to V3.2, introducing DeepSeek Sparse Attention architecture through continued training on V3.1-Terminus. It served as a testing ground for the sparse attention innovations later refined in V3.2.

Chat

DeepSeek: DeepSeek V3.1 Terminus

deepseek/deepseek-v3.1-terminus

DeepSeek V3.1-Terminus is the September 2025 refined update to V3.1, addressing user-reported issues like language mixing and improving Code Agent and Search Agent capabilities. It represents the final, most stable version of the V3 architecture before V3.2.

Chat

DeepSeek: DeepSeek V3.1

deepseek/deepseek-chat-v3.1

DeepSeek V3.1 is an August 2025 hybrid model that combines the capabilities of V3 and R1, supporting both thinking and non-thinking modes via chat template switching. It features 671B parameters (37B activated), 128K context, and significantly improved tool-calling and agent capabilities.
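V3.1's improved tool calling can be exercised through the `tools` option of `puter.ai.chat`, which takes OpenAI-style function definitions. A minimal sketch; the `get_weather` tool, its schema, and the response shape shown in the comment are illustrative assumptions, not part of any real API surface beyond the function-calling format itself:

```javascript
// Hypothetical tool definition in the OpenAI function-calling format,
// passed to puter.ai.chat via its `tools` option.
const tools = [{
    type: "function",
    function: {
        name: "get_weather",  // hypothetical example tool
        description: "Get the current weather for a city",
        parameters: {
            type: "object",
            properties: {
                city: { type: "string", description: "City name" }
            },
            required: ["city"]
        }
    }
}];

// Guarded so the sketch only runs where Puter.js is actually loaded
// (e.g. in a browser with <script src="https://js.puter.com/v2/"></script>).
if (typeof puter !== "undefined") {
    puter.ai.chat("What's the weather in Paris?", {
        model: "deepseek/deepseek-chat-v3.1",
        tools
    }).then(response => {
        // If the model decides to call the tool, the response carries
        // tool-call data rather than plain text.
        console.log(response);
    });
}
```

When the model opts to call a tool, your app executes the named function itself and sends the result back in a follow-up chat turn; the model never runs your code directly.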

Chat

DeepSeek: R1 0528

deepseek/deepseek-r1-0528

DeepSeek R1-0528 is the May 2025 major update to R1, featuring dramatically improved reasoning depth with nearly double the thinking tokens (23K vs 12K average) and approaching performance of O3 and Gemini 2.5 Pro. It adds function calling support, reduced hallucinations, and improved AIME accuracy from 70% to 87.5%.

Chat

DeepSeek: DeepSeek V3 0324

deepseek/deepseek-chat-v3-0324

DeepSeek V3-0324 is the March 2025 update to DeepSeek V3, incorporating reinforcement learning techniques from R1 to significantly improve reasoning, coding, and frontend development capabilities. It became the first open-source model to outperform all proprietary non-reasoning models on benchmarks, exceeding GPT-4.5 in math and coding tasks.

Chat

DeepSeek Reasoner

deepseek/deepseek-reasoner

DeepSeek Reasoner is the API alias for DeepSeek's reasoning models (R1 series), which use chain-of-thought reasoning to solve complex math, coding, and logic problems. It displays its thinking process before arriving at answers and achieves performance comparable to OpenAI o1.
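Because reasoning models emit long chains of thought before the final answer, streaming the response is usually preferable to waiting for it whole. A sketch assuming Puter.js's `stream: true` option, which yields an async iterable of text chunks; the `accumulate` helper is just for illustration:

```javascript
// Collect streamed text parts into one string. Works with any async
// iterable of { text } chunks, like puter.ai.chat yields when stream: true.
async function accumulate(stream) {
    let full = "";
    for await (const part of stream) {
        if (part?.text) {
            full += part.text;  // a real app would render this incrementally
        }
    }
    return full;
}

// Guarded so the sketch is safe to load outside the browser.
if (typeof puter !== "undefined") {
    puter.ai.chat("Prove that the square root of 2 is irrational.", {
        model: "deepseek/deepseek-reasoner",
        stream: true
    }).then(accumulate).then(console.log);
}
```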

Chat

DeepSeek: R1

deepseek/deepseek-r1

DeepSeek R1 is DeepSeek's first-generation reasoning model released January 2025, trained via large-scale reinforcement learning to achieve performance comparable to OpenAI o1 on math, code, and reasoning tasks. It pioneered open-source reasoning capabilities with self-verification and reflection behaviors.

Chat

DeepSeek: R1 Distill Llama 70B

deepseek/deepseek-r1-distill-llama-70b

DeepSeek R1 Distill Llama 70B is a 70 billion parameter dense model fine-tuned from Llama 3.3-70B-Instruct using 800K reasoning samples generated by DeepSeek R1. It brings R1's reasoning capabilities to a more accessible size while maintaining strong performance on math and coding benchmarks.

Chat

DeepSeek: R1 Distill Qwen 32B

deepseek/deepseek-r1-distill-qwen-32b

DeepSeek R1 Distill Qwen 32B is a 32 billion parameter dense model fine-tuned from Qwen 2.5 using R1-generated reasoning data, achieving state-of-the-art results for dense models. It outperforms OpenAI o1-mini on various benchmarks while being efficient enough for local deployment.

Chat

DeepSeek Chat

deepseek/deepseek-chat

DeepSeek Chat is the general-purpose conversational alias that points to the latest DeepSeek V3 chat model, a 671B parameter Mixture-of-Experts LLM optimized for everyday conversations, coding assistance, and general tasks. It supports 128K context and provides fast, direct responses without explicit reasoning chains.
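For multi-turn conversations, `puter.ai.chat` also accepts an array of role-tagged messages in place of a single prompt string. A hedged sketch; the `buildHistory` helper is a hypothetical convenience, not part of Puter.js:

```javascript
// Build an OpenAI-style message history from [role, content] pairs.
// Pure helper with a hypothetical name, included only for illustration.
function buildHistory(turns) {
    return turns.map(([role, content]) => ({ role, content }));
}

const messages = buildHistory([
    ["system", "You are a concise assistant."],
    ["user", "What is a Mixture-of-Experts model?"],
    ["assistant", "A model that routes each token to a few expert subnetworks."],
    ["user", "And why does that make inference cheaper?"]
]);

// Guarded so the sketch only runs where Puter.js is loaded.
if (typeof puter !== "undefined") {
    puter.ai.chat(messages, { model: "deepseek/deepseek-chat" })
        .then(response => console.log(response));
}
```

Passing the full history each turn is what gives the model conversational memory; the API itself is stateless.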

Frequently Asked Questions

What is this DeepSeek API about?

The DeepSeek API gives you access to models for AI chat. Through Puter.js, you can start using DeepSeek models instantly with zero setup or configuration.

Which DeepSeek models can I use?

Puter.js supports a variety of DeepSeek models, including DeepSeek V3.2, DeepSeek V3.2 Speciale, DeepSeek V3.2 Exp, and more. You can find all AI models supported by Puter.js in the AI model list.

How much does it cost?

With the User-Pays model, users cover their own AI costs through their Puter account. This means you can build and ship apps without paying for AI usage or worrying about infrastructure expenses.

What is Puter.js?

Puter.js is a JavaScript library that provides access to AI, storage, and other cloud services directly from a single API. It handles authentication, infrastructure, and scaling so you can focus on building your app.

Does this work with React / Vue / Vanilla JS / Node / etc.?

Yes. The DeepSeek API through Puter.js works with any JavaScript framework, in Node.js, or in plain HTML. Just include the library and start building. See the documentation for more details.