Arcee AI: Virtuoso Large API
Access Arcee AI: Virtuoso Large from Arcee AI using Puter.js AI API.
arcee-ai/virtuoso-large
Model Card
Arcee Virtuoso Large is a 72B-parameter general-purpose model based on Qwen2.5-72B, trained with DistillKit and MergeKit using DeepSeek R1 distillation techniques. It retains a 128k-token context window for ingesting large documents, codebases, or financial filings, and excels at cross-domain reasoning, creative writing, and enterprise QA. In Arcee Conductor pipelines it serves as the fallback brain when smaller SLMs flag low confidence.
Context Window: N/A
Max Output: 64,000 tokens
Input Cost: $0.75 per million tokens
Output Cost: $1.20 per million tokens
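To get a feel for what those rates mean in practice, the snippet below converts a request's token counts into an approximate dollar cost. The estimateCost helper is hypothetical (not part of the Puter.js API); it simply applies the per-million-token prices listed above.

// Hypothetical helper: estimates request cost from the listed per-million-token rates.
const INPUT_COST_PER_M = 0.75;   // USD per 1M input tokens
const OUTPUT_COST_PER_M = 1.20;  // USD per 1M output tokens

function estimateCost(inputTokens, outputTokens) {
    return (inputTokens / 1e6) * INPUT_COST_PER_M
         + (outputTokens / 1e6) * OUTPUT_COST_PER_M;
}

// Example: a 2,000-token prompt with a 500-token reply costs about $0.0021
console.log(estimateCost(2000, 500));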
API Usage Example
Add Arcee AI: Virtuoso Large to your app with just a few lines of code.
No API keys, no backend, no configuration required.
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "arcee-ai/virtuoso-large"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
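For longer replies, you may prefer to render the output as it is generated rather than waiting for the full message. The sketch below assumes the stream option and the part.text field of the Puter.js chat API; adjust the field names if your version of the library differs.

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Stream the reply chunk by chunk instead of waiting for the full message.
        // Assumes the stream option and part.text fields of the Puter.js chat API.
        (async () => {
            const response = await puter.ai.chat(
                "Write a short story about a robot learning to paint",
                { model: "arcee-ai/virtuoso-large", stream: true }
            );
            for await (const part of response) {
                if (part?.text) document.body.innerHTML += part.text;
            }
        })();
    </script>
</body>
</html>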
Get started with Puter.js
Add Arcee AI: Virtuoso Large to your app without worrying about API keys or setup.
Read the Docs · View Tutorials