Self-hosted Puter Now Supports Ollama Models
We have some great news! Self-hosted Puter can now automatically detect and use Ollama models running on your local machine.
Puter.com supports more than 500 models from OpenAI, Anthropic, Google, DeepSeek, Qwen, Meta, and more, right out of the box with no additional configuration. Self-hosted Puter, however, has traditionally required you to manually configure each model you wanted to use. This was a hassle, and it was easy to miss a model or make a mistake.
With the Puter-Ollama integration, you can now use any model supported by Ollama with self-hosted Puter. No more manual model configuration: when you start Puter locally, it automatically detects Ollama on your system and lets you use it without any additional setup. All you have to do is set the model parameter to ollama:<model-name>, where <model-name> is the name of the model as published on Ollama.
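Since Puter detects the models available on your local machine, make sure the model you want is downloaded first. You can pull it with Ollama's standard CLI command (shown here for the same model used in the example below):

ollama pull gpt-oss:20b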
Here's an example of how to use the gpt-oss:20b model:
puter.ai.chat("What is the capital of France?", { model: "ollama:gpt-oss:20b" });
Just like the cloud version of Puter, one line of code is all you need to start using the model!
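For completeness, here is a minimal sketch of a full page that runs this example in the browser. It loads Puter.js from its CDN and uses puter.print, Puter.js's standard output helper, to display the reply:

<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        // Ask the locally detected Ollama model a question and print the reply.
        puter.ai.chat("What is the capital of France?", { model: "ollama:gpt-oss:20b" })
            .then(response => puter.print(response));
    </script>
</body>
</html>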
That's it! You can now use any Ollama model with self-hosted Puter. This not only makes it easier to use Puter's AI capabilities locally, it also helps you get started with local AI development without worrying about API keys or usage limits.
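One last tip: for longer outputs you may not want to wait for the full response. The sketch below assumes that Puter.js's streaming interface for cloud models (passing stream: true and iterating over the async-iterable response) works the same way with Ollama models:

(async () => {
    // Stream the reply chunk by chunk instead of waiting for the whole response.
    const response = await puter.ai.chat(
        "Write a short poem about local AI.",
        { model: "ollama:gpt-oss:20b", stream: true }
    );
    for await (const part of response) {
        puter.print(part?.text);
    }
})();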