Liquid AI: LFM2-8B-A1B
This model is no longer available. You can still add AI to your application with Puter.js.
Model Card
LFM2-8B-A1B is a sparse Mixture-of-Experts (MoE) language model from Liquid AI with 8.3B total parameters, of which only 1.5B are active per token: each MoE block holds 32 experts, and the router selects the top 4 for every token.
This design delivers the quality of a 3-4B dense model at roughly the compute cost of a 1.5B model, and it runs faster than Qwen3-1.7B in practice. Verified benchmarks include GSM8K 84.4%, MATH500 74.2%, IFEval 77.6%, and MMLU-Pro 37.4%.
For API developers, it is a strong choice for latency-sensitive applications requiring larger-model quality at minimal compute cost — ideal for high-throughput pipelines where speed and efficiency are priorities.
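To make the "32 experts, top-4 per token" design concrete, here is a minimal sketch of top-k gating as used in sparse MoE blocks generally. The helper names and gate logits are illustrative, not Liquid AI's actual implementation:

```javascript
// Softmax over raw gate scores (numerically stabilized).
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Route one token: score all experts, keep the top-k, renormalize their weights.
function routeTopK(gateLogits, k) {
  const probs = softmax(gateLogits);
  const ranked = probs
    .map((p, expert) => ({ expert, p }))
    .sort((a, b) => b.p - a.p)
    .slice(0, k);
  const total = ranked.reduce((a, e) => a + e.p, 0);
  return ranked.map(({ expert, p }) => ({ expert, weight: p / total }));
}

// 32 experts per block, 4 active per token, as described above.
const logits = Array.from({ length: 32 }, (_, i) => Math.sin(i * 1.7));
const active = routeTopK(logits, 4);
console.log(active.map(e => e.expert)); // indices of the 4 selected experts
```

Only the 4 selected experts run a forward pass for that token, which is why per-token compute tracks the 1.5B active parameters rather than the 8.3B total.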
Context Window: 33K tokens
Max Output: N/A
Input Cost: $0.01 per million tokens
Output Cost: $0.02 per million tokens
Release Date: Jun 1, 2025
Code Example
Add AI to your app with the Puter.js AI API — no API keys or setup required.
// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

// Send a prompt and render the model's reply.
puter.ai.chat("Explain quantum computing in simple terms").then(response => {
  document.body.innerHTML = response.message.content;
});
<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    puter.ai.chat("Explain quantum computing in simple terms").then(response => {
      document.body.innerHTML = response.message.content;
    });
  </script>
</body>
</html>
More AI Models From Liquid AI
LFM2-24B-A2B
LFM2 24B A2B is a sparse Mixture-of-Experts model from Liquid AI featuring a novel hybrid architecture that combines gated short convolution blocks with Grouped Query Attention in a 3:1 ratio, developed through hardware-in-the-loop architecture search. With 24 billion total parameters but only ~2 billion active per token, it delivers high throughput while outperforming larger MoE competitors like Qwen3-30B-A3B in throughput benchmarks. It supports 9 languages, a 32K context window, native function calling, and structured outputs. A strong API choice for high-volume multi-agent pipelines, RAG backends, and multilingual applications that demand low per-token cost alongside capable general reasoning.
LFM2.5-1.2B-Instruct
LFM 2.5 1.2B Instruct is a compact instruction-tuned language model from Liquid AI, designed to deliver best-in-class performance at the 1-billion-parameter scale. Trained on 28 trillion tokens with reinforcement learning, it achieves strong scores across knowledge (MMLU-Pro: 44.35), reasoning (GPQA: 38.89), and instruction following (IFEval: 86.23) — outperforming similarly sized models like Llama-3.2-1B and Gemma-3-1B on these benchmarks. The model supports tool use, structured outputs, and function calling, making it a solid choice for lightweight agentic pipelines, chatbots, and latency-sensitive API integrations where cost and throughput matter most.
LFM2.5-1.2B-Thinking
LFM 2.5 1.2B Thinking is a compact reasoning model from Liquid AI that generates explicit chain-of-thought traces before producing answers, enabling more reliable performance on multi-step problems at the 1-billion-parameter scale. Compared to its instruct sibling, it shows major benchmark gains in math reasoning (MATH-500: 88 vs. 63), instruction following (Multi-IF: 69 vs. 61), and tool use (BFCLv3: 57 vs. 49). It matches or exceeds Qwen3-1.7B on most reasoning benchmarks despite having 40% fewer parameters. Well-suited for API use cases involving agentic tool calling, math, and code — anywhere a reasoning trace meaningfully improves answer quality.
Frequently Asked Questions
How can I access LFM2-8B-A1B with Puter.js?
You can access LFM2-8B-A1B by Liquid AI through the Puter.js AI API. Include the library in your web app or Node.js project and start making calls with just a few lines of JavaScript — no backend and no configuration required. You can also use it with Python or cURL via Puter's OpenAI-compatible API.
Is LFM2-8B-A1B free to use?
Yes, it is free if you're using it through Puter.js. With the User-Pays Model, you can add LFM2-8B-A1B to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.
How much does LFM2-8B-A1B cost?
| Token type | Price per 1M tokens |
|---|---|
| Input | $0.01 |
| Output | $0.02 |
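At these rates, a request costs (input tokens × $0.01 + output tokens × $0.02) ÷ 1,000,000. A quick sketch using the published prices (the token counts in the example are made up):

```javascript
// Estimate request cost from the published per-million-token rates.
const INPUT_PER_M = 0.01;   // $ per 1M input tokens
const OUTPUT_PER_M = 0.02;  // $ per 1M output tokens

function estimateCost(inputTokens, outputTokens) {
  return (inputTokens * INPUT_PER_M + outputTokens * OUTPUT_PER_M) / 1e6;
}

// e.g. a 10,000-token prompt with a 1,000-token reply:
console.log(estimateCost(10_000, 1_000)); // ≈ $0.00012
```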
Who created LFM2-8B-A1B?
LFM2-8B-A1B was created by Liquid AI and released on Jun 1, 2025.
What is the context window of LFM2-8B-A1B?
LFM2-8B-A1B supports a context window of 33K tokens. For reference, that is roughly equivalent to 66 pages of text.
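The 66-page figure follows from common rules of thumb (about 0.75 English words per token, about 375 words per single-spaced page); both constants below are rough heuristics, not exact conversions:

```javascript
// Back-of-envelope: how many pages of text fit in the context window.
const CONTEXT_TOKENS = 33_000;
const WORDS_PER_TOKEN = 0.75;  // typical for English text
const WORDS_PER_PAGE = 375;    // roughly one single-spaced page

const pages = Math.round(CONTEXT_TOKENS * WORDS_PER_TOKEN / WORDS_PER_PAGE);
console.log(pages); // 66
```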
Does LFM2-8B-A1B work with JavaScript frameworks?
Yes — the LFM2-8B-A1B API works with any JavaScript framework, Node.js, or plain HTML through Puter.js. Just include the library and start building. See the documentation for more details.
Get started with Puter.js
Add AI to your application without worrying about API keys or setup.
Explore Models · View Tutorials