Cognitive Computations: Dolphin Mistral 24B Venice Edition (Uncensored)

Access Dolphin Mistral 24B Venice Edition (Uncensored) from Cognitive Computations using Puter.js AI API.


Model Card

Dolphin Mistral 24B Venice Edition is an uncensored, general-purpose language model fine-tuned from Mistral Small 24B (Instruct-2501), developed by Cognitive Computations (the Dolphin project, founded by Eric Hartford) in collaboration with Venice.ai. It has 24 billion parameters and a 32,768-token (roughly 33K) context window.

The model is specifically designed to remove default safety filters and content refusals, giving developers full control over system prompts, alignment, and model behavior. On Venice's censorship benchmark suite, it achieved a refusal rate of just 2.2%, the lowest among tested models.
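Because alignment is left entirely to the developer, the usual pattern is to supply your own system message with every request. The sketch below assembles such a request for the OpenAI-compatible Python client shown elsewhere on this page; the persona text and the `build_messages` helper are illustrative, not part of any SDK:

```python
# Sketch: supplying a developer-controlled system prompt.
# The system-prompt text below is illustrative only.

def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble a chat request whose behavior is set by your system prompt."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a blunt technical reviewer. Answer directly, without hedging.",
    "Critique this database schema: users(id, name, email_text)",
)

# Pass to the OpenAI-compatible client as:
# client.chat.completions.create(
#     model="cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
#     messages=messages,
# )
```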

While the base Mistral Small 24B leaned STEM-heavy, this fine-tune adds strong creative writing and storytelling capabilities with consistent character and narrative memory across long interactions. It also features improved tone control — neutral and polite by default, but fully steerable via prompting.
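Character and narrative consistency across long interactions comes from resending the accumulated conversation with each request; like other chat models, this one is stateless between calls. A minimal sketch of keeping that history (the `Conversation` helper is my own, not part of any SDK):

```python
class Conversation:
    """Accumulates chat history so each request carries the full context."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text: str):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text: str):
        self.messages.append({"role": "assistant", "content": text})

# Illustrative character prompt and exchange:
convo = Conversation("You are Mira, a sarcastic ship's AI. Stay in character.")
convo.add_user("Mira, status report?")
convo.add_assistant("Engines nominal. Your coffee, however, is critical.")
convo.add_user("What did you just say about my coffee?")

# convo.messages now holds the whole exchange; sending it as the
# `messages` parameter lets the model answer the follow-up in character.
```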

Best suited for developers building applications that require maximum output flexibility, custom ethical frameworks, or unrestricted content generation where typical model refusals would be a blocker.

Context Window: 33K tokens

Max Output: N/A

Input Cost: $0 per million tokens

Output Cost: $0 per million tokens

Release Date: May 8, 2025

API Usage Example

Add Dolphin Mistral 24B Venice Edition (Uncensored) to your app with just a few lines of code.
No backend, no configuration required.

JavaScript (npm module):

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "cognitivecomputations/dolphin-mistral-24b-venice-edition:free"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
HTML (script tag):

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "cognitivecomputations/dolphin-mistral-24b-venice-edition:free"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
Python (OpenAI-compatible API):

# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
cURL:

curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
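For long generations you will often want to stream tokens as they arrive rather than wait for the full reply. Assuming Puter's OpenAI-compatible endpoint emits standard streaming chunks when `stream=True` is passed (an assumption here, not confirmed by this page), the consuming side looks like this; the dataclasses are minimal stand-ins for the chunk shape so the helper can be shown self-contained:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

# Minimal stand-ins for the OpenAI streaming chunk shape (assumption:
# the endpoint emits standard Chat Completions chunks).
@dataclass
class Delta:
    content: Optional[str]

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:
    choices: list

def collect_stream(chunks: Iterable[Chunk]) -> str:
    """Concatenate the incremental text deltas from a streamed response."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:
            parts.append(delta.content)
    return "".join(parts)

# With the real client you would pass stream=True and feed the result in:
# stream = client.chat.completions.create(model=..., messages=..., stream=True)
# text = collect_stream(stream)
```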

View full documentation →

Frequently Asked Questions

How do I use Dolphin Mistral 24B Venice Edition (Uncensored)?

You can access Dolphin Mistral 24B Venice Edition (Uncensored) by Cognitive Computations through Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.

Is Dolphin Mistral 24B Venice Edition (Uncensored) free?

Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Dolphin Mistral 24B Venice Edition (Uncensored) to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

What is the pricing for Dolphin Mistral 24B Venice Edition (Uncensored)?

Pricing for Dolphin Mistral 24B Venice Edition (Uncensored) is based on the number of input and output tokens used per request.

Price per 1M tokens:
Input: $0
Output: $0
Who created Dolphin Mistral 24B Venice Edition (Uncensored)?

Dolphin Mistral 24B Venice Edition (Uncensored) was created by Cognitive Computations and released on May 8, 2025.

What is the context window of Dolphin Mistral 24B Venice Edition (Uncensored)?

Dolphin Mistral 24B Venice Edition (Uncensored) supports a context window of 33K tokens. For reference, that is roughly equivalent to 66 pages of text.
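The "66 pages" figure follows from common rule-of-thumb conversions, roughly 0.75 English words per token and about 375 words per page; both ratios are rough assumptions, not properties of this model:

```python
def tokens_to_pages(tokens: int, words_per_token: float = 0.75,
                    words_per_page: int = 375) -> float:
    """Rough page estimate from a token budget (rule-of-thumb ratios)."""
    return tokens * words_per_token / words_per_page

print(tokens_to_pages(33_000))  # 66.0
```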

Does it work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Dolphin Mistral 24B Venice Edition (Uncensored) API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.

Get started with Puter.js

Add Dolphin Mistral 24B Venice Edition (Uncensored) to your app without worrying about API keys or setup.

Read the Docs · View Tutorials