Migrate to @fal-ai/client
The @fal-ai/serverless-client package has been deprecated in favor of @fal-ai/client. Please check the migration guide for more information.
fal-ai/fast-lcm-diffusion
Text To Image
The client provides a convenient way to interact with the model API.
npm install --save @fal-ai/client
Set FAL_KEY as an environment variable in your runtime.
export FAL_KEY="YOUR_API_KEY"
The client API handles the submit protocol for you: it tracks request status updates and returns the result once the request is completed.
import { fal } from "@fal-ai/client";
const result = await fal.subscribe("fal-ai/fast-lcm-diffusion", {
input: {
prompt: "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k."
},
logs: true,
onQueueUpdate: (update) => {
if (update.status === "IN_PROGRESS") {
update.logs.map((log) => log.message).forEach(console.log);
}
},
});
console.log(result.data);
console.log(result.requestId);
This model has a real-time mode over WebSockets, supported via the fal.realtime client.
import { fal } from "@fal-ai/client";
const connection = fal.realtime.connect("fal-ai/fast-lcm-diffusion", {
onResult: (result) => {
console.log(result);
},
onError: (error) => {
console.error(error);
}
});
connection.send({
prompt: "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k."
});
The API uses an API Key for authentication. It is recommended you set the FAL_KEY environment variable in your runtime when possible.
import { fal } from "@fal-ai/client";
fal.config({
credentials: "YOUR_FAL_KEY"
});
When running code on the client side (e.g. in a browser, mobile app, or GUI application), make sure not to expose your FAL_KEY. Instead, use a server-side proxy to make requests to the API. For more information, check out our server-side integration guide.
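For example, the client can be configured to route all requests through your own proxy endpoint; the /api/fal/proxy path below is just an illustration of such a route:
import { fal } from "@fal-ai/client";
// Route requests through a server-side proxy instead of shipping credentials
// to the client. The route path below is a hypothetical example.
fal.config({
  proxyUrl: "/api/fal/proxy",
});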
The client API provides a convenient way to submit requests to the model.
import { fal } from "@fal-ai/client";
const { request_id } = await fal.queue.submit("fal-ai/fast-lcm-diffusion", {
input: {
prompt: "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k."
},
webhookUrl: "https://optional.webhook.url/for/results",
});
You can fetch the status of a request to check if it is completed or still in progress.
import { fal } from "@fal-ai/client";
const status = await fal.queue.status("fal-ai/fast-lcm-diffusion", {
requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
logs: true,
});
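If you are managing the queue yourself rather than using fal.subscribe, a simple approach is to poll the status until it reports COMPLETED. A minimal sketch (the one-second delay is arbitrary):
import { fal } from "@fal-ai/client";

// Minimal polling sketch: check the queue status until the request completes.
async function waitForCompletion(requestId) {
  while (true) {
    const status = await fal.queue.status("fal-ai/fast-lcm-diffusion", {
      requestId,
      logs: true,
    });
    if (status.status === "COMPLETED") {
      return status;
    }
    // Arbitrary delay between polls; tune to your needs.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}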
Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.
import { fal } from "@fal-ai/client";
const result = await fal.queue.result("fal-ai/fast-lcm-diffusion", {
requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);
Some attributes in the API accept file URLs as input. Whenever that's the case you can pass your own URL or a Base64 data URI.
You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that for large files this alternative, although convenient, can impact request performance.
You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit, or consider the request as a bot.
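As a sketch of the two forms, a data URI or a publicly hosted URL is passed like any other input value; the file_url field below is hypothetical and only illustrates the shape, since this endpoint's input schema has no file parameter:
// Hypothetical input field named "file_url"; shown only to illustrate the two forms.
const inputWithDataUri = {
  file_url: "data:text/plain;base64,SGVsbG8sIFdvcmxkIQ==", // Base64 data URI, decoded by the API
};
const inputWithHostedUrl = {
  file_url: "https://example.com/assets/hello.txt", // publicly accessible URL
};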
We provide a convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.
import { fal } from "@fal-ai/client";
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);
The client will auto-upload the file for you if you pass a binary object (e.g. File, Data).
Read more about file handling in our file upload guide.
model_name
ModelNameEnum
The name of the model to use. Default value: "stabilityai/stable-diffusion-xl-base-1.0"
Possible enum values: stabilityai/stable-diffusion-xl-base-1.0, runwayml/stable-diffusion-v1-5
prompt
string
* required The prompt to use for generating the image. Be as descriptive as possible for best results.
negative_prompt
string
The negative prompt to use. Use it to address details that you don't want
in the image. This could be colors, objects, scenery and even the small details
(e.g. moustache, blurry, low resolution). Default value: ""
image_size
ImageSize | Enum
The size of the generated image. Default value: square_hd
Possible enum values: square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9
Note: For custom image sizes, you can pass the width
and height
as an object:
"image_size": {
"width": 1280,
"height": 720
}
num_inference_steps
integer
The number of inference steps to perform. Default value: 6
seed
integer
The same seed and the same prompt given to the same version of Stable Diffusion will output the same image every time.
guidance_scale
float
The CFG (Classifier Free Guidance) scale is a measure of how close you want
the model to stick to your prompt when looking for a related image to show you. Default value: 1.5
sync_mode
boolean
If set to true, the function will wait for the image to be generated and uploaded
before returning the response. This will increase the latency of the function but
it allows you to get the image directly in the response without going through the CDN. Default value: true
num_images
integer
The number of images to generate. Default value: 1
enable_safety_checker
boolean
If set to true, the safety checker will be enabled. Default value: true
safety_checker_version
SafetyCheckerVersionEnum
The version of the safety checker to use. v1 is the default CompVis safety checker. v2 uses a custom ViT model. Default value: "v1"
Possible enum values: v1, v2
expand_prompt
boolean
If set to true, the prompt will be expanded with additional prompts.
format
FormatEnum
The format of the generated image. Default value: "jpeg"
Possible enum values: jpeg, png
guidance_rescale
float
The rescale factor for the CFG.
request_id
string
An ID bound to a request; it can be used with the response to identify the request itself. Default value: ""
{
"model_name": "stabilityai/stable-diffusion-xl-base-1.0",
"prompt": "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k.",
"negative_prompt": "cartoon, illustration, animation. face. male, female",
"image_size": "square_hd",
"num_inference_steps": 6,
"guidance_scale": 1.5,
"sync_mode": true,
"num_images": 1,
"enable_safety_checker": true,
"safety_checker_version": "v1",
"format": "jpeg"
}
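Putting the input schema together, a full request with these parameters could look like the following sketch; the values mirror the example payload above, with a custom image_size object in place of the preset:
import { fal } from "@fal-ai/client";

// Request using the input fields described above.
const result = await fal.subscribe("fal-ai/fast-lcm-diffusion", {
  input: {
    model_name: "stabilityai/stable-diffusion-xl-base-1.0",
    prompt: "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k.",
    negative_prompt: "cartoon, illustration, animation. face. male, female",
    image_size: { width: 1280, height: 720 }, // custom size instead of a preset
    num_inference_steps: 6,
    guidance_scale: 1.5,
    num_images: 1,
    enable_safety_checker: true,
    safety_checker_version: "v1",
    format: "jpeg",
  },
});
console.log(result.data);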
images
list<Image>
The generated image files info.
seed
integer
* required Seed of the generated image. It will be the same value passed in the input, or the randomly generated seed that was used if none was passed.
has_nsfw_concepts
list<boolean>
Whether the generated images contain NSFW concepts.
prompt
string
* required The prompt used for generating the image.
{
"images": [
{
"url": "",
"content_type": "image/jpeg"
}
],
"prompt": ""
}
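Given the output shape above, the generated image URLs and the seed can be read directly from result.data after a fal.subscribe call, for example:
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/fast-lcm-diffusion", {
  input: {
    prompt: "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k.",
  },
});

// result.data follows the output schema above: images, seed, and the prompt used.
for (const image of result.data.images) {
  console.log(image.url, image.content_type);
}
console.log(result.data.seed);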
width
integer
The width of the generated image. Default value: 512
height
integer
The height of the generated image. Default value: 512
path
string
* required URL or path to the LoRA weights, or an HF model name.
scale
float
The scale of the LoRA weight. This is used to scale the LoRA weight
before merging it with the base model. Default value: 1
force
boolean
If set to true, the embedding will be forced to be used.
path
string
* required URL or the path to the embedding weights.
The list of tokens to use for the embedding. Default value: <s0>,<s1>
url
string
* required
width
integer
* required
height
integer
* required
content_type
string
Default value: "image/jpeg"