Arcee AI: Trinity Large Thinking
Access Trinity Large Thinking from Arcee AI using the Puter.js AI API.
arcee-ai/trinity-large-thinking
Model Card
Trinity Large Thinking is a reasoning-optimized open-source model from Arcee AI, built on a 398B-parameter sparse Mixture-of-Experts architecture with approximately 13B active parameters per token.
It uses extended chain-of-thought reasoning via explicit thinking traces before generating responses. The model is purpose-built for agentic workloads — multi-turn tool calling, long-horizon planning, and stable behavior across extended agent loops.
On agentic benchmarks, it scores 94.7% on τ²-Bench and 91.9% on PinchBench, ranking #2 overall on PinchBench behind only Claude Opus 4.6 — at roughly 96% lower cost. It supports a 262K-token context window with up to 80K output tokens.
Released under Apache 2.0, it's a strong pick for developers running cost-sensitive agent pipelines that need reliable tool use and instruction following at frontier-level quality.
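Since the model's headline use case is multi-turn tool calling, here is a minimal sketch of the local side of an OpenAI-style function-calling loop. The tool name, schema, and stubbed weather lookup are illustrative assumptions, not part of this page; in a live loop you would pass `tools=TOOLS` to `client.chat.completions.create` against Puter's OpenAI-compatible endpoint and feed each returned tool call through the dispatcher before the next turn.

```python
import json

# Hypothetical tool definition in the standard OpenAI function-calling
# format. The get_weather tool is an illustrative assumption.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(tool_call):
    """Run the local function a tool call asks for, and wrap the result
    as the 'tool' role message the next request must include."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "get_weather":
        result = {"city": args["city"], "temp_c": 21}  # stubbed lookup
    else:
        result = {"error": "unknown tool"}
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }
```

The dispatcher returns a message dict you append to `messages` before calling the endpoint again, which is what "stable behavior across extended agent loops" is exercising.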
Context Window: 262K tokens
Max Output: 80K tokens
Input Cost: $0.25 per million tokens
Output Cost: $0.90 per million tokens
Release Date: Apr 1, 2026
API Usage Example
Add Trinity Large Thinking to your app with just a few lines of code.
No backend, no configuration required.
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';
puter.ai.chat("Explain quantum computing in simple terms", {
model: "arcee-ai/trinity-large-thinking"
}).then(response => {
document.body.innerHTML = response.message.content;
});
<html>
<body>
<script src="https://js.puter.com/v2/"></script>
<script>
puter.ai.chat("Explain quantum computing in simple terms", {
model: "arcee-ai/trinity-large-thinking"
}).then(response => {
document.body.innerHTML = response.message.content;
});
</script>
</body>
</html>
# pip install openai
from openai import OpenAI
client = OpenAI(
base_url="https://api.puter.com/puterai/openai/v1/",
api_key="YOUR_PUTER_AUTH_TOKEN",
)
response = client.chat.completions.create(
model="arcee-ai/trinity-large-thinking",
messages=[
{"role": "user", "content": "Explain quantum computing in simple terms"}
],
)
print(response.choices[0].message.content)
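Because this is a thinking model, responses may include an explicit reasoning trace before the answer. The `<think>...</think>` delimiters below are an assumption for illustration (check the actual format your endpoint returns); this sketch separates a delimited trace from the final answer with plain string processing:

```python
import re

def split_thinking(text, open_tag="<think>", close_tag="</think>"):
    """Separate an inline reasoning trace from the final answer.
    The <think>...</think> delimiters are an assumption -- verify the
    format the endpoint actually returns before relying on this."""
    pattern = re.escape(open_tag) + r"(.*?)" + re.escape(close_tag)
    m = re.search(pattern, text, flags=re.DOTALL)
    if not m:
        return "", text.strip()  # no trace found; everything is answer
    trace = m.group(1).strip()
    answer = re.sub(pattern, "", text, count=1, flags=re.DOTALL).strip()
    return trace, answer
```

This lets you log or hide the trace while showing users only the final answer.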
curl https://api.puter.com/puterai/openai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
-d '{
"model": "arcee-ai/trinity-large-thinking",
"messages": [
{"role": "user", "content": "Explain quantum computing in simple terms"}
]
}'
More AI Models From Arcee AI
Trinity Large Preview
Trinity Large Preview is a 400B-parameter open-weight sparse Mixture-of-Experts model from Arcee AI with 13B active parameters per token, trained on 17+ trillion tokens. It excels at creative writing, multi-turn conversations, tool use, and agentic coding tasks with support for up to 128K context.
Trinity Mini
Arcee Trinity Mini is a 26B parameter sparse mixture-of-experts (MoE) model with only 3B active parameters per token, trained end-to-end in the U.S. on 10T tokens. It features 128 experts with 8 active per token, a 128k context window, and is optimized for multi-turn reasoning, function calling, and agent workflows. Released under Apache 2.0, it offers strong performance at extremely cost-efficient pricing.
Virtuoso Large
Arcee Virtuoso Large is a 72B parameter general-purpose model based on Qwen 2.5-72B, trained using DistillKit and MergeKit with DeepSeek R1 distillation techniques. It retains a 128k context window for ingesting large documents, codebases, or financial filings, excelling at cross-domain reasoning, creative writing, and enterprise QA. The model serves as the fallback brain in Arcee Conductor pipelines when smaller SLMs flag low confidence.
Frequently Asked Questions
How can I access Trinity Large Thinking?
You can access Trinity Large Thinking by Arcee AI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is Trinity Large Thinking free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Trinity Large Thinking to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Trinity Large Thinking cost?

| | Price per 1M tokens |
|---|---|
| Input | $0.25 |
| Output | $0.90 |
Who created Trinity Large Thinking, and when was it released?
Trinity Large Thinking was created by Arcee AI and released on Apr 1, 2026.
What is the context window of Trinity Large Thinking?
Trinity Large Thinking supports a context window of 262K tokens. For reference, that is roughly equivalent to 524 pages of text.
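The 524-page figure follows from a rough heuristic of about 500 tokens per printed page; the constant is an assumption for illustration, not a documented value. The arithmetic:

```python
def approx_pages(context_tokens, tokens_per_page=500):
    """Estimate how many printed pages a token budget covers.
    500 tokens/page is a rough heuristic, not an official figure."""
    return context_tokens // tokens_per_page

# 262K-token context window at ~500 tokens per page -> 524 pages
```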
How many tokens can Trinity Large Thinking generate in one response?
Trinity Large Thinking can generate up to 80K tokens in a single response.
Does the Trinity Large Thinking API work with my framework?
Yes — the Trinity Large Thinking API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Trinity Large Thinking to your app without worrying about API keys or setup.
Read the Docs | View Tutorials