Inception: Mercury API

Access Inception: Mercury using the Puter.js AI API.

Model Card

Mercury, from Inception Labs, is the world's first commercial diffusion large language model (dLLM). It generates text 5-10x faster than traditional autoregressive LLMs by predicting multiple tokens in parallel, and is designed for latency-sensitive applications like voice agents, search interfaces, and chatbots while matching the quality of speed-optimized models like Claude 3.5 Haiku.

Context Window: N/A
Max Output: 16,384 tokens
Input Cost: $0.25 per million tokens
Output Cost: $1.00 per million tokens
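
As a rough worked example, assuming the list prices above, a request with 1,000 input tokens and 500 output tokens would cost about 1,000 × $0.25/1M + 500 × $1.00/1M = $0.00025 + $0.0005 = $0.00075.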

API Usage Example

Add Inception: Mercury to your app with just a few lines of code.
No API keys, no backend, no configuration required.

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Send a prompt to the Mercury model via Puter.js
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "inception/mercury"
        }).then(response => {
            // Display the model's reply on the page
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
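
For longer replies, and given Mercury's latency focus, you may want to render output as it is generated instead of waiting for the full response. A minimal sketch, assuming Puter.js's stream: true option on puter.ai.chat, which yields response parts you can append as they arrive:

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        (async () => {
            // Request a streamed response from Mercury
            const response = await puter.ai.chat(
                "Explain quantum computing in simple terms",
                { model: "inception/mercury", stream: true }
            );
            // Append each chunk of text to the page as it arrives
            for await (const part of response) {
                if (part?.text) {
                    document.body.append(part.text);
                }
            }
        })();
    </script>
</body>
</html>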

Get started with Puter.js

Add Inception: Mercury to your app without worrying about API keys or setup.
