
Liquid AI API

Access Liquid AI models instantly with Puter.js and add AI to any app in a few lines of code, with no backend or API keys required.

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain AI like I'm five!", {
    model: "liquid/lfm-2-24b-a2b"
}).then(response => {
    console.log(response);
});
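
Or, if you'd rather skip the build step, load Puter.js directly in the browser with a script tag:
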
<!DOCTYPE html>
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain AI like I'm five!", {
            model: "liquid/lfm-2-24b-a2b"
        }).then(response => {
            console.log(response);
        });
    </script>
</body>
</html>

List of Liquid AI Models

Chat

LFM2-24B-A2B

liquid/lfm-2-24b-a2b

LFM2-24B-A2B is a sparse Mixture-of-Experts model from Liquid AI featuring a novel hybrid architecture that combines gated short convolution blocks with Grouped Query Attention in a 3:1 ratio, developed through hardware-in-the-loop architecture search. With 24 billion total parameters but only ~2 billion active per token, it outperforms larger MoE competitors such as Qwen3-30B-A3B in throughput benchmarks. It supports 9 languages, a 32K context window, native function calling, and structured outputs. A strong API choice for high-volume multi-agent pipelines, RAG backends, and multilingual applications that demand low per-token cost alongside capable general reasoning.
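
Because LFM2-24B-A2B supports native function calling, you can pass OpenAI-style tool definitions through Puter.js. A minimal sketch, assuming the tools option of puter.ai.chat; get_weather is a hypothetical tool used only for illustration:

const tools = [{
    type: "function",
    function: {
        name: "get_weather", // hypothetical tool, for illustration only
        description: "Get the current weather for a given city",
        parameters: {
            type: "object",
            properties: {
                city: { type: "string", description: "Name of the city" }
            },
            required: ["city"]
        }
    }
}];

puter.ai.chat("What's the weather in Paris right now?", {
    model: "liquid/lfm-2-24b-a2b",
    tools
}).then(response => {
    // When the model decides to call a tool, its arguments arrive as a JSON string.
    const call = response.message?.tool_calls?.[0];
    if (call) {
        console.log(call.function.name, JSON.parse(call.function.arguments));
    } else {
        console.log(response);
    }
});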

Chat

LFM2.5-1.2B-Instruct

liquid/lfm-2.5-1.2b-instruct:free

LFM2.5-1.2B-Instruct is a compact instruction-tuned language model from Liquid AI, designed to deliver best-in-class performance at the 1-billion-parameter scale. Trained on 28 trillion tokens with reinforcement learning, it achieves strong scores across knowledge (MMLU-Pro: 44.35), reasoning (GPQA: 38.89), and instruction following (IFEval: 86.23), outperforming similarly sized models like Llama-3.2-1B and Gemma-3-1B on these benchmarks. The model supports tool use, structured outputs, and function calling, making it a solid choice for lightweight agentic pipelines, chatbots, and latency-sensitive API integrations where cost and throughput matter most.
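
For chat-style apps, puter.ai.chat also accepts an array of role/content messages instead of a single string, so you can carry conversation history across turns. A minimal sketch using the free instruct model:

const conversation = [
    { role: "system", content: "You are a concise, friendly assistant." },
    { role: "user", content: "What is a Mixture-of-Experts model?" }
];

puter.ai.chat(conversation, {
    model: "liquid/lfm-2.5-1.2b-instruct:free"
}).then(response => {
    console.log(response);
});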

Chat

LFM2.5-1.2B-Thinking

liquid/lfm-2.5-1.2b-thinking:free

LFM2.5-1.2B-Thinking is a compact reasoning model from Liquid AI that generates explicit chain-of-thought traces before producing answers, enabling more reliable performance on multi-step problems at the 1-billion-parameter scale. Compared to its instruct sibling, it shows major benchmark gains in math reasoning (MATH-500: 88 vs. 63), instruction following (Multi-IF: 69 vs. 61), and tool use (BFCLv3: 57 vs. 49). It matches or exceeds Qwen3-1.7B on most reasoning benchmarks despite having 40% fewer parameters. Well-suited for API use cases involving agentic tool calling, math, and code: anywhere a reasoning trace meaningfully improves answer quality.
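
Reasoning traces can run long, so streaming the output as it is generated usually gives a better experience. A minimal sketch, assuming the stream: true option of puter.ai.chat:

async function streamAnswer() {
    const stream = await puter.ai.chat(
        "A bat and a ball cost $1.10, and the bat costs $1.00 more than the ball. How much is the ball?",
        { model: "liquid/lfm-2.5-1.2b-thinking:free", stream: true }
    );

    // Each part carries a chunk of text as the model reasons toward the answer.
    for await (const part of stream) {
        if (part?.text) console.log(part.text);
    }
}

streamAnswer();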

Chat

LFM2-8B-A1B

liquid/lfm2-8b-a1b

LFM2-8B-A1B is a sparse Mixture-of-Experts language model from Liquid AI with 8.3B total parameters but only 1.5B active per token, using 32 experts per MoE block with top-4 active per token. This design delivers 3-4B dense model quality at the compute cost of a 1.5B model, making it faster than Qwen3-1.7B in practice. Verified benchmarks include GSM8K 84.4%, MATH500 74.2%, IFEval 77.6%, and MMLU-Pro 37.4%. For API developers, it is a strong choice for latency-sensitive applications requiring larger-model quality at minimal compute cost — ideal for high-throughput pipelines where speed and efficiency are priorities.
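
With only ~1.5B parameters active per token, this model suits fan-out workloads where many small requests run at once. A minimal sketch batching independent prompts with Promise.all:

const prompts = [
    "Summarize in one sentence: the meeting moved from Monday to Friday.",
    "Translate to French: good morning, everyone.",
    "Extract the city name from: flights to Tokyo on May 2."
];

// Fire all requests concurrently; each one resolves independently.
Promise.all(
    prompts.map(prompt =>
        puter.ai.chat(prompt, { model: "liquid/lfm2-8b-a1b" })
    )
).then(replies => {
    replies.forEach(reply => console.log(reply));
});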

Chat

LFM2-2.6B

liquid/lfm2-2.6b

LFM2-2.6B is a hybrid language model from Liquid AI, built on a novel architecture that alternates Grouped Query Attention blocks with gated short convolutional layers. Trained on 10 trillion tokens, it delivers fast inference with a significantly reduced KV cache footprint compared to pure-transformer models. Despite its 2.6B parameter count, it outperforms larger models in its class, including Llama-3.2-3B-Instruct and Gemma-3-4b-it. Verified benchmarks include 82.41% on GSM8K and 79.56% on IFEval, surpassing Llama-3.2-3B's 71.43% on the latter. For API developers, it is well-suited for low-latency, cost-efficient inference tasks such as instruction following, Q&A, and math-related applications.
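
A minimal sketch of a one-off Q&A call with async/await, using the model ID listed above:

async function ask(question) {
    const response = await puter.ai.chat(question, {
        model: "liquid/lfm2-2.6b"
    });
    console.log(response);
}

ask("A recipe needs 3/4 cup of sugar per batch. How much sugar do 5 batches need?");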

Frequently Asked Questions

What is this Liquid AI API about?

The Liquid AI API gives you access to Liquid AI's chat models. Through Puter.js, you can start using them instantly with zero setup or configuration.

Which Liquid AI models can I use?

Puter.js supports a variety of Liquid AI models, including LFM2-24B-A2B, LFM2.5-1.2B-Instruct, LFM2.5-1.2B-Thinking, and more. Find all AI models supported by Puter.js in the AI model list.

How much does it cost?

Nothing for you as the developer. Under Puter's User-Pays model, users cover their own AI costs through their Puter account, so you can build apps without worrying about infrastructure expenses.

What is Puter.js?

Puter.js is a JavaScript library that provides access to AI, storage, and other cloud services directly from a single API. It handles authentication, infrastructure, and scaling so you can focus on building your app.
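
For example, because the same library exposes cloud storage, you can persist a model reply to the user's Puter file system. A minimal sketch, assuming puter.fs.write and that the reply text lives at response.message.content:

puter.ai.chat("Write a haiku about autumn.", {
    model: "liquid/lfm-2-24b-a2b"
}).then(response => {
    // Save the reply to a file in the signed-in user's Puter storage.
    // (Assumes the reply text is at response.message.content.)
    return puter.fs.write("liquid-haiku.txt", response.message.content);
}).then(() => {
    console.log("Saved to liquid-haiku.txt");
});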

Does this work with React / Vue / Vanilla JS / Node / etc.?

Yes — the Liquid AI API through Puter.js works with any JavaScript framework, Node.js, or plain HTML. Just include the library and start building. See the documentation for more details.