LiquidAI/LFM2-8B-A1B API
Access LiquidAI/LFM2-8B-A1B from Liquid AI using the Puter.js AI API.
Model ID: liquid/lfm2-8b-a1b
Model Card
Liquid LFM2-8B-A1B is Liquid AI's first on-device Mixture-of-Experts model, with 8.3B total parameters but only 1.5B active per token, delivering the quality of a 3-4B dense model at 1.5B-class compute. It runs faster than Qwen3-1.7B on mobile CPUs and is designed for private, low-latency applications on phones, tablets, and laptops.
Context Window: N/A tokens
Max Output: N/A tokens
Input Cost: $0.01 per million tokens
Output Cost: $0.02 per million tokens
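As a quick illustration of the pricing above, the sketch below estimates the cost of a single request from its token counts. The estimateCost helper and the example token numbers are hypothetical; only the per-million-token rates come from the listing above.
// Hypothetical cost estimate based on the listed rates:
// $0.01 per million input tokens, $0.02 per million output tokens.
const INPUT_RATE_PER_M = 0.01;
const OUTPUT_RATE_PER_M = 0.02;

function estimateCost(inputTokens, outputTokens) {
  return (inputTokens / 1_000_000) * INPUT_RATE_PER_M
       + (outputTokens / 1_000_000) * OUTPUT_RATE_PER_M;
}

// Example: 100,000 input tokens and 20,000 output tokens
// cost 0.1 * $0.01 + 0.02 * $0.02 = $0.001 + $0.0004 = $0.0014.
console.log(estimateCost(100_000, 20_000).toFixed(4)); // "0.0014"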
API Usage Example
Add LiquidAI/LFM2-8B-A1B to your app with just a few lines of code.
No API keys, no backend, no configuration required.
<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    // Send a prompt to the model and render its reply on the page.
    puter.ai.chat("Explain quantum computing in simple terms", {
      model: "liquid/lfm2-8b-a1b"
    }).then(response => {
      document.body.innerHTML = response.message.content;
    });
  </script>
</body>
</html>
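For longer responses you may prefer to stream the output as it is generated. The sketch below assumes the Puter.js streaming option (stream: true), where the call resolves to an async iterable of parts exposing a text field; treat the option name and part shape as assumptions to verify against the Puter.js documentation.
<script src="https://js.puter.com/v2/"></script>
<script>
  // Assumed streaming variant: with stream: true, puter.ai.chat resolves to
  // an async iterable of partial responses (each part carrying a .text field).
  (async () => {
    const response = await puter.ai.chat("Explain quantum computing in simple terms", {
      model: "liquid/lfm2-8b-a1b",
      stream: true
    });
    for await (const part of response) {
      // Append each partial chunk to the page as it arrives.
      if (part?.text) document.body.innerHTML += part.text;
    }
  })();
</script>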
Get started with Puter.js
Add LiquidAI/LFM2-8B-A1B to your app without worrying about API keys or setup.
Read the Docs · View Tutorials