Free, Unlimited Liquid AI API
This tutorial shows you how to use Puter.js to access Liquid AI's Liquid Foundation Models (LFMs) for free, with no API keys and no backend. Puter.js is completely free for developers, so you can give your users powerful AI capabilities directly from your frontend code, without server-side setup or usage restrictions.
Puter pioneered the "User Pays" model, in which each user covers their own usage costs. This lets developers offer advanced AI capabilities at no cost to themselves, with no API keys or server-side infrastructure to manage.
Getting Started
You can use Puter.js without any API keys or sign-ups. To start using Puter.js, include the following script tag in your HTML file, either in the <head> or <body> section:
<script src="https://js.puter.com/v2/"></script>
Nothing else is required to start using Puter.js for free access to Liquid AI models.
Example 1: Basic Text Generation with Liquid AI
To generate text with a Liquid AI model, call the puter.ai.chat() function:
puter.ai.chat("What are the benefits of renewable energy?", { model: "liquid/lfm-7b" })
    .then(response => {
        puter.print(response);
    });
Full code example:
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        puter.ai.chat("What are the benefits of renewable energy?", { model: "liquid/lfm-7b" })
            .then(response => {
                puter.print(response);
            });
    </script>
</body>
</html>
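If you prefer async/await over promise chains, the same call can be written as below. This is a minimal sketch; the `ask` helper name is our own, not part of Puter.js:

```javascript
// Hypothetical async/await wrapper around puter.ai.chat().
// Assumes the puter.js script tag shown above has been loaded.
async function ask(prompt) {
    const response = await puter.ai.chat(prompt, { model: "liquid/lfm-7b" });
    puter.print(response);
}

// Only call it where puter is actually available (i.e. in the browser).
if (typeof puter !== "undefined") {
    ask("What are the benefits of renewable energy?");
}
```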
Example 2: Streaming Responses
For longer responses, you can use streaming to get results in real-time:
<html>
<body>
    <div id="response"></div>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        async function streamResponse() {
            const outputDiv = document.getElementById('response');
            const response = await puter.ai.chat(
                "Write a comprehensive explanation of machine learning algorithms",
                { model: 'liquid/lfm-7b', stream: true }
            );
            for await (const part of response) {
                if (part?.text) {
                    outputDiv.innerHTML += part.text;
                }
            }
        }
        streamResponse();
    </script>
</body>
</html>
This example demonstrates streaming with Liquid AI's model, which provides a better user experience by showing the response as it's generated rather than waiting for the complete response.
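If you also need the complete text after streaming finishes (for example, to save it), you can accumulate the parts yourself. The `collectStream` helper below is our own illustration, not a Puter.js API; it works with any async iterable that yields `{ text }` parts, like the stream returned by `puter.ai.chat(..., { stream: true })`:

```javascript
// Collect streamed parts into a single string (illustrative helper).
async function collectStream(stream) {
    let fullText = "";
    for await (const part of stream) {
        if (part?.text) {
            fullText += part.text;
        }
    }
    return fullText;
}
```

You can still update the page inside the loop (e.g. `outputDiv.innerHTML += part.text`) while building the full string at the same time.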
Example 3: Multi-Message Conversations
You can create conversational experiences by passing multiple messages:
<html>
<body>
    <script src="https://js.puter.com/v2/"></script>
    <script>
        const conversationHistory = [
            {
                role: 'system',
                content: 'You are a helpful coding assistant specialized in JavaScript.'
            },
            {
                role: 'user',
                content: 'How do I create an async function in JavaScript?'
            },
            {
                role: 'assistant',
                content: 'You can create an async function using the async keyword before the function declaration...'
            },
            {
                role: 'user',
                content: 'Can you show me an example with error handling?'
            }
        ];

        puter.ai.chat(conversationHistory, { model: 'liquid/lfm-3b' })
            .then(response => {
                puter.print(response);
            });
    </script>
</body>
</html>
This example shows how to maintain conversation context by providing the full message history, enabling more natural and contextual responses.
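In a real chat UI you would grow this history turn by turn. A small helper like the one below keeps the message array in the shape `puter.ai.chat()` expects; the `createConversation` name and its methods are our own sketch, not a Puter.js API:

```javascript
// Minimal conversation-history manager (illustrative sketch).
function createConversation(systemPrompt) {
    const messages = [{ role: 'system', content: systemPrompt }];
    return {
        messages,
        addUser(content) { messages.push({ role: 'user', content }); },
        addAssistant(content) { messages.push({ role: 'assistant', content }); }
    };
}

// Usage sketch: after each reply, record it and send the whole history again.
// const convo = createConversation('You are a helpful coding assistant.');
// convo.addUser('How do I create an async function?');
// const reply = await puter.ai.chat(convo.messages, { model: 'liquid/lfm-3b' });
// convo.addAssistant(String(reply));
```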
Example 4: Function Calling
Liquid AI models support function calling, allowing the AI to interact with external tools and APIs:
<!DOCTYPE html>
<html>
<head>
    <title>Liquid AI Function Calling Demo</title>
    <script src="https://js.puter.com/v2/"></script>
    <style>
        body { font-family: Arial, sans-serif; max-width: 600px; margin: 20px auto; padding: 20px; }
        .container { border: 1px solid #ccc; padding: 20px; border-radius: 5px; }
        input { width: 100%; padding: 10px; margin: 10px 0; box-sizing: border-box; }
        button { width: 100%; padding: 10px; background: #007bff; color: white; border: none; border-radius: 5px; cursor: pointer; }
        button:disabled { background: #ccc; }
        #response { margin-top: 20px; padding: 15px; background: #f8f9fa; border-radius: 5px; display: none; }
    </style>
</head>
<body>
    <div class="container">
        <h1>Liquid AI Function Calling Demo</h1>
        <input type="text" id="userInput" value="What's the current temperature in Tokyo?" placeholder="Ask about temperature or calculations" />
        <button id="submit">Submit</button>
        <div id="response"></div>
    </div>

    <script>
        // Mock temperature lookup
        function getTemperature(location) {
            const mockTemperatureData = {
                'Tokyo': '26°C',
                'London': '15°C',
                'New York': '22°C',
                'Paris': '19°C'
            };
            return mockTemperatureData[location] || '18°C';
        }

        // Mock calculator (eval is fine for a demo, but never use it on untrusted input)
        function calculate(expression) {
            try {
                return eval(expression).toString();
            } catch (error) {
                return "Invalid calculation";
            }
        }

        // Define the tools available to the AI
        const tools = [
            {
                type: "function",
                function: {
                    name: "get_temperature",
                    description: "Get current temperature for a given location",
                    parameters: {
                        type: "object",
                        properties: {
                            location: {
                                type: "string",
                                description: "City name e.g. Tokyo, London"
                            }
                        },
                        required: ["location"]
                    }
                }
            },
            {
                type: "function",
                function: {
                    name: "calculate",
                    description: "Perform mathematical calculations",
                    parameters: {
                        type: "object",
                        properties: {
                            expression: {
                                type: "string",
                                description: "Mathematical expression to evaluate"
                            }
                        },
                        required: ["expression"]
                    }
                }
            }
        ];

        async function handleSubmit() {
            const userInput = document.getElementById('userInput').value;
            const submitBtn = document.getElementById('submit');
            const responseDiv = document.getElementById('response');

            if (!userInput) return;

            submitBtn.disabled = true;
            submitBtn.textContent = 'Processing...';
            responseDiv.style.display = 'none';

            try {
                const completion = await puter.ai.chat(userInput, {
                    tools: tools,
                    model: 'liquid/lfm-7b'
                });

                let finalResponse;

                // Check if the AI wants to call a function
                if (completion.message.tool_calls?.length > 0) {
                    const toolCall = completion.message.tool_calls[0];
                    const args = JSON.parse(toolCall.function.arguments);
                    let result;

                    if (toolCall.function.name === 'get_temperature') {
                        result = getTemperature(args.location);
                    } else if (toolCall.function.name === 'calculate') {
                        result = calculate(args.expression);
                    }

                    // Send the function result back to the AI for a final response
                    finalResponse = await puter.ai.chat([
                        { role: "user", content: userInput },
                        completion.message,
                        {
                            role: "tool",
                            tool_call_id: toolCall.id,
                            content: result
                        }
                    ], { model: 'liquid/lfm-7b' });
                } else {
                    finalResponse = completion;
                }

                responseDiv.innerHTML = `<strong>Response:</strong><br>${finalResponse}`;
                responseDiv.style.display = 'block';
            } catch (error) {
                responseDiv.innerHTML = `<strong>Error:</strong><br>${error.message}`;
                responseDiv.style.display = 'block';
            }

            submitBtn.disabled = false;
            submitBtn.textContent = 'Submit';
        }

        // Event handlers
        document.getElementById('submit').addEventListener('click', handleSubmit);
        document.getElementById('userInput').addEventListener('keypress', function(e) {
            if (e.key === 'Enter') handleSubmit();
        });
    </script>
</body>
</html>
This example demonstrates how Liquid AI models can call external functions to provide more accurate and contextual responses.
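The if/else dispatch inside handleSubmit grows awkward as you add more tools. One way to generalize it is a lookup table keyed by the tool name the model returns; the `dispatchToolCall` helper below is a sketch of our own, not a Puter.js API:

```javascript
// Dispatch a tool_call (shaped as in the example above) to a local handler.
function dispatchToolCall(toolCall, handlers) {
    const handler = handlers[toolCall.function.name];
    if (!handler) {
        throw new Error("Unknown tool: " + toolCall.function.name);
    }
    // Tool arguments arrive as a JSON string, so parse before calling.
    const args = JSON.parse(toolCall.function.arguments);
    return handler(args);
}

// A handler table mirroring the two tools defined in the example.
const toolHandlers = {
    get_temperature: (args) => args.location === 'Tokyo' ? '26°C' : '18°C',
    calculate: (args) => {
        try { return eval(args.expression).toString(); }
        catch { return "Invalid calculation"; }
    }
};
```

In handleSubmit, the if/else chain could then shrink to a single line: `result = dispatchToolCall(toolCall, toolHandlers);`.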
Available Models
The following Liquid AI models are supported by Puter.js:
liquid/lfm-7b
liquid/lfm-3b
You now have free access to Liquid AI's Large Foundation Models using Puter.js. This allows you to leverage advanced AI capabilities without needing API keys or backend infrastructure. True serverless AI!
Related
Free, Serverless AI and Cloud