Arcee AI: Spotlight
arcee-ai/spotlight
Access Spotlight from Arcee AI using Puter.js AI API.
Get Started

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "arcee-ai/spotlight"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "arcee-ai/spotlight"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
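For longer responses, the reply can also be streamed as it is generated. The sketch below assumes Puter.js's streaming mode (`stream: true`), in which `puter.ai.chat` resolves to an async iterable of parts that expose a `text` field; the `appendText` helper is our own, added for illustration.

```javascript
// Streaming sketch (browser; assumes Puter.js is loaded as above).
// With { stream: true }, puter.ai.chat resolves to an async iterable
// of response parts rather than a single message object.

// Append one streamed part's text to the accumulated response,
// skipping parts that carry no text.
function appendText(full, part) {
  return part && part.text ? full + part.text : full;
}

async function streamChat() {
  const stream = await puter.ai.chat(
    "Explain quantum computing in simple terms",
    { model: "arcee-ai/spotlight", stream: true }
  );

  let full = "";
  for await (const part of stream) {
    full = appendText(full, part);
    document.body.textContent = full; // render incrementally as text arrives
  }
  return full;
}
```

Calling `streamChat()` from a page that loads Puter.js renders the answer token by token instead of waiting for the full response.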
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="arcee-ai/spotlight",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)
print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "arcee-ai/spotlight",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
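The same OpenAI-compatible endpoint shown in the cURL example can be called from Node.js (18+) with the built-in `fetch`, with no SDK required. This is a minimal sketch: `YOUR_PUTER_AUTH_TOKEN` is the same placeholder as above, and `buildChatBody` is a small helper of our own.

```javascript
// Node 18+ sketch: call Puter's OpenAI-compatible endpoint with fetch.

// Build the JSON body for a single-turn chat completion request.
function buildChatBody(model, userPrompt) {
  return {
    model,
    messages: [{ role: "user", content: userPrompt }],
  };
}

async function chatCompletion(token, prompt) {
  const res = await fetch(
    "https://api.puter.com/puterai/openai/v1/chat/completions",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${token}`, // token is a placeholder
      },
      body: JSON.stringify(buildChatBody("arcee-ai/spotlight", prompt)),
    }
  );
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Usage: `chatCompletion("YOUR_PUTER_AUTH_TOKEN", "Explain quantum computing in simple terms").then(console.log)`.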
Model Card
Arcee Spotlight is a 7-billion-parameter vision-language model from Arcee AI, derived from Qwen2.5-VL and fine-tuned for tight image-text grounding tasks including visual question answering, image captioning, and diagram analysis.
At 7B parameters it is designed for fast inference, making it practical for real-time or high-volume multimodal API workloads where latency and cost are constraints. Early benchmarks show it matching or outscoring larger VLMs such as LLaVA-1.6 13B on VQA and POPE alignment tests.
It is a strong choice for developers who need capable vision-language understanding without the cost overhead of larger multimodal models, and is well suited for document parsing, visual QA pipelines, and image-grounded chat.
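Because Spotlight is a vision-language model, a typical call pairs a text question with an image. The sketch below assumes the Puter.js form of `puter.ai.chat` that takes an image URL as its second argument; the image URL is a placeholder, and the `extractText` helper is our own, covering both string and message-object response shapes.

```javascript
// Visual question answering sketch (browser; assumes Puter.js is loaded).

// Helper (ours, for illustration): normalize the reply to plain text,
// whether it arrives as a bare string or as a { message: { content } } object.
function extractText(response) {
  if (typeof response === "string") return response;
  if (
    response &&
    response.message &&
    typeof response.message.content === "string"
  ) {
    return response.message.content;
  }
  return "";
}

function askAboutImage(imageUrl) {
  // imageUrl is a placeholder; pass any publicly reachable image URL.
  return puter.ai.chat(
    "What is shown in this image?",
    imageUrl,
    { model: "arcee-ai/spotlight" }
  ).then((response) => extractText(response));
}
```

For example, `askAboutImage("https://example.com/diagram.png").then(console.log)` would return Spotlight's description of the diagram.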
Context Window: 131K tokens
Max Output: 66K tokens
Input Cost: $0.18 per million tokens
Output Cost: $0.18 per million tokens
Release Date: Apr 1, 2025
Model Playground
Try Spotlight instantly in your browser.
This playground uses the Puter.js AI API — no API keys or setup required.
More AI Models From Arcee AI
Trinity Large Thinking
Trinity Large Thinking is a 398-billion-parameter sparse Mixture-of-Experts reasoning model from Arcee AI, with approximately 13B active parameters per token, post-trained with extended chain-of-thought and agentic reinforcement learning. It generates explicit reasoning traces in thinking blocks before final responses, and its 262K context window accommodates long agentic reasoning chains. Benchmark results include 94.7% on τ²-Bench and 98.2% on LiveCodeBench, placing it at #2 on PinchBench behind only Claude Opus 4.6. Released under Apache 2.0, Trinity Large Thinking is the strongest option in the Trinity family for agentic pipelines, long-horizon planning, complex multi-step coding, and tasks that benefit from transparent reasoning traces.
Trinity Large Preview
Trinity Large Preview is a 400-billion-parameter sparse Mixture-of-Experts model from Arcee AI, with approximately 13B active parameters per token. It uses 256 experts with 4 active per token, trained on over 17 trillion tokens. On MMLU it scores 87.2, and it achieved 24.0 on AIME 2025, demonstrating strong mathematical reasoning alongside general knowledge. The 128k context window supports long-document analysis and complex reasoning workflows. Trinity Large Preview is suited for complex reasoning, math, and coding-adjacent workflows where developers want near-frontier quality through an API at substantially lower cost than dense models of equivalent scale.
Trinity Mini
Trinity Mini is a 26-billion-parameter sparse Mixture-of-Experts model from Arcee AI, with approximately 3B active parameters per token. It uses 128 experts with 8 active per token, blending global sparsity with gated attention techniques. Specifically tuned for multi-turn agent workflows, tool orchestration, function calling, and structured outputs, it scores 84.95 on MMLU and 59.67 on BFCL V3, with throughput exceeding 200 tokens per second. Released under Apache 2.0, the 128k context window and strong function-calling performance make Trinity Mini a practical choice for agentic systems, backend automation, and tool-use pipelines where inference speed and cost efficiency matter.
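Trinity Mini's function-calling support can be exercised through the same Puter.js API. The sketch below is assumption-heavy: it presumes `puter.ai.chat` accepts an OpenAI-style `tools` array and returns `tool_calls` on the message, the `arcee-ai/trinity-mini` model id is our guess (check the model's own page), and the `get_time` tool and `dispatchToolCall` helper are invented for the example.

```javascript
// Function-calling sketch (assumes an OpenAI-style tools array is accepted).

// One illustrative tool definition, following the OpenAI
// function-calling schema convention.
const tools = [{
  type: "function",
  function: {
    name: "get_time", // invented tool for this example
    description: "Get the current time in a given IANA timezone",
    parameters: {
      type: "object",
      properties: { timezone: { type: "string" } },
      required: ["timezone"],
    },
  },
}];

// Route a returned tool call to a local implementation and run it.
function dispatchToolCall(call, impls) {
  const fn = impls[call.function.name];
  if (!fn) throw new Error("unknown tool: " + call.function.name);
  return fn(JSON.parse(call.function.arguments || "{}"));
}

async function chatWithTools() {
  const response = await puter.ai.chat("What time is it in Tokyo?", {
    model: "arcee-ai/trinity-mini", // assumed model id
    tools,
  });

  const calls = (response.message && response.message.tool_calls) || [];
  return calls.map((call) => dispatchToolCall(call, {
    get_time: ({ timezone }) =>
      new Date().toLocaleString("en-US", { timeZone: timezone }),
  }));
}
```

In a full agent loop, the tool results would be appended to the message history and sent back to the model for a final answer; the sketch stops at the dispatch step.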
Frequently Asked Questions
How do I access Spotlight?
You can access Spotlight by Arcee AI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is Spotlight free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add Spotlight to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does Spotlight cost?

| | Price per 1M tokens |
|---|---|
| Input | $0.18 |
| Output | $0.18 |
Who made Spotlight, and when was it released?
Spotlight was created by Arcee AI and released on Apr 1, 2025.
What is Spotlight's context window?
Spotlight supports a context window of 131K tokens. For reference, that is roughly equivalent to 262 pages of text (assuming about 500 tokens per page).
How long can Spotlight's responses be?
Spotlight can generate up to 66K tokens in a single response.
Can I use Spotlight with my existing JavaScript framework?
Yes — the Spotlight API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add Spotlight to your app without worrying about API keys or setup.
Read the Docs View Tutorials