
Liquid AI LFM 2 24B Is Now Available in Puter.js


Puter.js now supports LFM2-24B-A2B, Liquid AI's largest open-weight model: a 24-billion-parameter Mixture-of-Experts (MoE) model that activates only 2.3B parameters per token.

What is LFM2-24B-A2B?

LFM2-24B-A2B is Liquid AI's most capable open model, scaling up the LFM2 hybrid architecture to 24 billion total parameters. It uses a sparse Mixture-of-Experts design with 64 experts per MoE block and top-4 routing, keeping only 2.3B parameters active per token for efficient inference.
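The sparse routing described above can be sketched in a few lines of JavaScript. This is a standalone illustration of top-k expert selection and weighting, not Liquid AI's actual implementation; the router scores here are random placeholders.

```javascript
// Sketch of sparse top-k expert routing, the mechanism behind MoE models
// like LFM2-24B-A2B (64 experts per MoE block, top-4 routing).
// Illustrative only -- not Liquid AI's implementation.

// Return the indices of the k largest scores (the "selected experts").
function topK(scores, k) {
  return scores
    .map((score, index) => ({ score, index }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((entry) => entry.index);
}

// Softmax over the selected experts' scores to get mixing weights.
function softmax(values) {
  const max = Math.max(...values);
  const exps = values.map((v) => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Router: for one token, pick k of the experts and weight their outputs.
function route(routerScores, k = 4) {
  const selected = topK(routerScores, k);
  const weights = softmax(selected.map((i) => routerScores[i]));
  return selected.map((expert, j) => ({ expert, weight: weights[j] }));
}

// Example: random router scores for 64 experts; only 4 are active per token,
// which is why just a fraction of the 24B parameters is used at inference.
const scores = Array.from({ length: 64 }, () => Math.random());
const active = route(scores);
console.log(active.length); // 4
```

Because each token touches only 4 of the 64 experts, inference cost scales with the active parameter count rather than the full 24B total.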

Examples

Text Generation

puter.ai.chat(
    "Explain the advantages of Mixture-of-Experts architectures over dense transformers",
    { model: "liquid/lfm-2-24b-a2b" }
).then(response => puter.print(response));

Code Generation

puter.ai.chat(
    "Write a Python function that implements binary search on a sorted array",
    { model: "liquid/lfm-2-24b-a2b" }
).then(response => puter.print(response));

Multilingual

puter.ai.chat(
    "Translate the following to French and German: 'The quick brown fox jumps over the lazy dog'",
    { model: "liquid/lfm-2-24b-a2b" }
).then(response => puter.print(response));

Get Started Now

Just add one library to your project:

// npm install @heyputer/puter.js
import { puter } from '@heyputer/puter.js';

Or add one script tag to your HTML:

<script src="https://js.puter.com/v2/"></script>

No API keys and no infrastructure setup required. Start building with LFM2-24B-A2B immediately.

Learn more:

Free, Serverless AI and Cloud
