liquid/lfm-2-24b-a2b
Model Card
LFM2 24B A2B is a sparse Mixture-of-Experts model from Liquid AI featuring a novel hybrid architecture that combines gated short convolution blocks with Grouped Query Attention in a 3:1 ratio, developed through hardware-in-the-loop architecture search.
With 24 billion total parameters but only ~2 billion active per token, it keeps per-token compute low, outperforming larger MoE competitors like Qwen3-30B-A3B in throughput benchmarks. It supports 9 languages, a 32K context window, native function calling, and structured outputs.
A strong API choice for high-volume multi-agent pipelines, RAG backends, and multilingual applications that demand low per-token cost alongside capable general reasoning.
Context Window 33K
tokens
Max Output N/A
tokens
Input Cost $0.03
per million tokens
Output Cost $0.12
per million tokens
Release Date Feb 25, 2026
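At the listed rates, per-request cost is easy to estimate. A minimal sketch (the `estimateCost` helper and the example token counts are our own illustration, not part of the API):

```javascript
// Published rates for liquid/lfm-2-24b-a2b (USD per million tokens).
const INPUT_RATE = 0.03;
const OUTPUT_RATE = 0.12;

// Estimate the cost of one request from its prompt and completion token counts.
function estimateCost(inputTokens, outputTokens) {
  return (inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE) / 1e6;
}

// A typical RAG call: 4,000 prompt tokens in, 500 completion tokens out.
console.log(estimateCost(4000, 500).toFixed(6)); // → "0.000180"
```

At these prices, a 4,000-token prompt with a 500-token completion costs well under a fiftieth of a cent, which is what makes the model attractive for high-volume pipelines.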
API Usage Example
Add LFM2-24B-A2B to your app with just a few lines of code.
No backend, no configuration required.
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

puter.ai.chat("Explain quantum computing in simple terms", {
    model: "liquid/lfm-2-24b-a2b"
}).then(response => {
    document.body.innerHTML = response.message.content;
});
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("Explain quantum computing in simple terms", {
            model: "liquid/lfm-2-24b-a2b"
        }).then(response => {
            document.body.innerHTML = response.message.content;
        });
    </script>
</body>
</html>
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.puter.com/puterai/openai/v1/",
    api_key="YOUR_PUTER_AUTH_TOKEN",
)

response = client.chat.completions.create(
    model="liquid/lfm-2-24b-a2b",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
)

print(response.choices[0].message.content)
curl https://api.puter.com/puterai/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_PUTER_AUTH_TOKEN" \
  -d '{
    "model": "liquid/lfm-2-24b-a2b",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
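The model card lists native function calling; the following is a sketch of how a tool-calling round trip could look with Puter.js, assuming it accepts OpenAI-style `tools` objects and returns a `tool_calls` field on the response message. The `get_stock_price` tool, its schema, and the price table are hypothetical, for illustration only:

```javascript
// Hypothetical tool definition in OpenAI function-calling format.
const tools = [{
  type: "function",
  function: {
    name: "get_stock_price",
    description: "Get the latest price for a stock ticker",
    parameters: {
      type: "object",
      properties: { ticker: { type: "string" } },
      required: ["ticker"]
    }
  }
}];

// Local handler the app runs when the model requests the tool.
function getStockPrice(ticker) {
  const prices = { AAPL: 191.2, MSFT: 417.9 }; // stand-in data
  return prices[ticker] ?? null;
}

// Sketch of the round trip: pass the schema with the request, then
// execute whichever tool call the model returns.
async function ask(question) {
  const response = await puter.ai.chat(question, {
    model: "liquid/lfm-2-24b-a2b",
    tools
  });
  const call = response.message.tool_calls?.[0];
  if (call) {
    const { ticker } = JSON.parse(call.function.arguments);
    return getStockPrice(ticker);
  }
  return response.message.content;
}
```

In a full agent loop you would send the tool result back to the model as a follow-up message; the sketch stops at executing the call.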
More AI Models From Liquid AI
LFM2.5-1.2B-Instruct
LFM 2.5 1.2B Instruct is a compact instruction-tuned language model from Liquid AI, designed to deliver best-in-class performance at the 1-billion-parameter scale. Trained on 28 trillion tokens with reinforcement learning, it achieves strong scores across knowledge (MMLU-Pro: 44.35), reasoning (GPQA: 38.89), and instruction following (IFEval: 86.23) — outperforming similarly sized models like Llama-3.2-1B and Gemma-3-1B on these benchmarks. The model supports tool use, structured outputs, and function calling, making it a solid choice for lightweight agentic pipelines, chatbots, and latency-sensitive API integrations where cost and throughput matter most.
LFM2.5-1.2B-Thinking
LFM 2.5 1.2B Thinking is a compact reasoning model from Liquid AI that generates explicit chain-of-thought traces before producing answers, enabling more reliable performance on multi-step problems at the 1-billion-parameter scale. Compared to its instruct sibling, it shows major benchmark gains in math reasoning (MATH-500: 88 vs. 63), instruction following (Multi-IF: 69 vs. 61), and tool use (BFCLv3: 57 vs. 49). It matches or exceeds Qwen3-1.7B on most reasoning benchmarks despite having 40% fewer parameters. Well-suited for API use cases involving agentic tool calling, math, and code — anywhere a reasoning trace meaningfully improves answer quality.
Frequently Asked Questions
How do I access LFM2-24B-A2B?
You can access LFM2-24B-A2B by Liquid AI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is LFM2-24B-A2B free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add LFM2-24B-A2B to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does LFM2-24B-A2B cost?
| Token type | Price per 1M tokens |
|---|---|
| Input | $0.03 |
| Output | $0.12 |
Who created LFM2-24B-A2B, and when was it released?
LFM2-24B-A2B was created by Liquid AI and released on Feb 25, 2026.
How large is the LFM2-24B-A2B context window?
LFM2-24B-A2B supports a context window of 33K tokens. For reference, that is roughly equivalent to 66 pages of text.
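The 66-page figure follows from common rule-of-thumb ratios, assumed here as ~0.75 English words per token and ~375 words per page:

```javascript
// Rough page math behind the "66 pages" estimate (assumed ratios).
const tokens = 33000;
const words = tokens * 0.75;  // ≈ 24,750 words
const pages = words / 375;    // ≈ 66 pages
console.log(pages); // → 66
```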
Does LFM2-24B-A2B work with my JavaScript framework?
Yes — the LFM2-24B-A2B API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add LFM2-24B-A2B to your app without worrying about API keys or setup.
Read the Docs View Tutorials