Liquid AI API

Access Liquid AI instantly with Puter.js, and add AI to any app in a few lines of code, with no backend and no API keys.

Using npm:

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain AI like I'm five!", {
    model: "liquid/lfm-2.2-6b"
}).then(response => {
    console.log(response);
});

Or directly in the browser with a script tag:

<!DOCTYPE html>
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain AI like I'm five!", {
            model: "liquid/lfm-2.2-6b"
        }).then(response => {
            console.log(response);
        });
    </script>
</body>
</html>
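Both examples log the raw response object. Assuming the response shape documented by Puter.js ({ message: { role, content } }), the reply text can be read as sketched below; the stub client is only a stand-in so the snippet also runs outside a page that loads js.puter.com/v2:

```javascript
// Sketch of reading the reply text. The response shape assumed here,
// { message: { role, content } }, follows the Puter.js docs.
// The stub only stands in for the real library outside the browser.
const client = globalThis.puter ?? {
    ai: {
        chat: async () => ({ message: { role: "assistant", content: "stub reply" } })
    }
};

async function ask(prompt) {
    const response = await client.ai.chat(prompt, {
        model: "liquid/lfm-2.2-6b"
    });
    return response.message.content; // the model's reply as a string
}

ask("Explain AI like I'm five!").then(text => console.log(text));
```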

List of Liquid AI Models

Chat

LiquidAI: LFM2.5-1.2B-Instruct

liquid/lfm-2.5-1.2b-instruct:free

Liquid LFM 2.5 1.2B Instruct is a compact 1.2B parameter model from Liquid AI optimized for on-device and edge deployment. It excels at instruction following, agentic tasks, data extraction, and RAG with extremely fast CPU inference and low memory usage. Best suited for mobile, IoT, and embedded systems rather than knowledge-intensive tasks or programming.
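One common pattern for the data-extraction use case above is to ask the model for JSON and parse its reply defensively. A minimal sketch; the prompt wording and the parsing helper are this example's own conventions, not part of the Puter.js API:

```javascript
// Build a prompt that asks the model to return structured JSON.
function buildExtractionPrompt(text, fields) {
    return `Extract the fields ${fields.join(", ")} from the text below ` +
           `and reply with a single JSON object only.\n\n${text}`;
}

// Defensive parse: pull the first {...} block out of the reply,
// since small models sometimes wrap JSON in extra prose.
function parseJsonReply(reply) {
    const match = reply.match(/\{[\s\S]*\}/);
    return match ? JSON.parse(match[0]) : null;
}

const prompt = buildExtractionPrompt(
    "Order #4412 shipped to Berlin on March 3.",
    ["orderId", "city", "date"]
);
// puter.ai.chat(prompt, { model: "liquid/lfm-2.5-1.2b-instruct:free" })
//     .then(r => console.log(parseJsonReply(r.message.content)));
```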

Chat

LiquidAI: LFM2.5-1.2B-Thinking

liquid/lfm-2.5-1.2b-thinking:free

Liquid LFM 2.5 1.2B Thinking is a reasoning-enhanced variant of Liquid AI's edge-optimized model that uses chain-of-thought reasoning while requiring fewer output tokens than comparable thinking models. It's designed for on-device deployment with fast CPU inference, ideal for agentic tasks, data extraction, and RAG. Not recommended for knowledge-intensive tasks or programming.
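Some reasoning models interleave their chain-of-thought with the final answer, often inside <think>...</think> delimiters; whether this particular model does so is an assumption here, so treat the delimiter as something to verify against actual output. A sketch of separating the two:

```javascript
// Hypothetical helper: split chain-of-thought from the final answer,
// assuming the reasoning arrives wrapped in <think>...</think> tags.
// That tag format is an assumption, not documented Liquid AI behavior;
// adjust the pattern to whatever the model actually emits.
function splitReasoning(reply) {
    const match = reply.match(/<think>([\s\S]*?)<\/think>\s*([\s\S]*)/);
    return match
        ? { reasoning: match[1].trim(), answer: match[2].trim() }
        : { reasoning: null, answer: reply.trim() };
}

// puter.ai.chat("Which is larger, 9.11 or 9.9?", {
//     model: "liquid/lfm-2.5-1.2b-thinking:free"
// }).then(r => console.log(splitReasoning(r.message.content).answer));
```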

Chat

LiquidAI: LFM2-8B-A1B

liquid/lfm2-8b-a1b

Liquid LFM2-8B-A1B is Liquid AI's first on-device Mixture-of-Experts model with 8.3B total parameters but only 1.5B active per token, delivering 3-4B dense model quality at 1.5B-class compute. It runs faster than Qwen3-1.7B on mobile CPUs and is designed for private, low-latency applications on phones, tablets, and laptops.

Chat

LiquidAI: LFM2-2.6B

liquid/lfm-2.2-6b

Liquid LFM2-2.6B is a 2.6 billion parameter hybrid language model from Liquid AI that combines grouped query attention with short convolutional layers for fast, efficient inference. It's optimized for on-device deployment on phones, laptops, and edge devices with strong multilingual support across 10 languages including English, Japanese, and Chinese.
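The model ids listed above can be gathered into a small lookup helper for switching between them. The short aliases are invented here for convenience; the ids themselves are copied from this page:

```javascript
// Liquid AI model ids as listed on this page, keyed by short aliases
// (the aliases are this sketch's own convention, not Puter.js names).
const LIQUID_MODELS = {
    instruct: "liquid/lfm-2.5-1.2b-instruct:free",
    thinking: "liquid/lfm-2.5-1.2b-thinking:free",
    moe:      "liquid/lfm2-8b-a1b",
    dense:    "liquid/lfm-2.2-6b"
};

// Resolve an alias and forward the prompt to puter.ai.chat.
function liquidChat(prompt, alias = "instruct") {
    const model = LIQUID_MODELS[alias];
    if (!model) throw new Error(`Unknown Liquid model alias: ${alias}`);
    return puter.ai.chat(prompt, { model });
}

// liquidChat("Summarize this page.", "thinking").then(r => console.log(r));
```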

Frequently Asked Questions

What is this Liquid AI API about?

The Liquid AI API gives you access to Liquid AI's chat models. Through Puter.js, you can start using these models instantly, with zero setup or configuration.

Which Liquid AI models can I use?

Puter.js supports a variety of Liquid AI models, including LiquidAI: LFM2.5-1.2B-Instruct, LiquidAI: LFM2.5-1.2B-Thinking, LiquidAI: LFM2-8B-A1B, and more. Find all AI models supported by Puter.js in the AI model list.

How much does it cost?

With the User-Pays model, users cover their own AI costs through their Puter account. This means you can build apps without worrying about infrastructure expenses.

What is Puter.js?

Puter.js is a JavaScript library that provides access to AI, storage, and other cloud services directly from a single API. It handles authentication, infrastructure, and scaling so you can focus on building your app.

Does this work with React / Vue / Vanilla JS / Node / etc.?

Yes. The Liquid AI API through Puter.js works with any JavaScript framework, with Node.js, or in plain HTML. Just include the library and start building; see the documentation for more details.