Stable Diffusion with LoRAs

fal-ai/lora
Inference
Commercial use

1. Calling the API

Install the client

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/serverless-client

Set up your API Key

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"

Submit a request

The client handles the API submit protocol for you: it tracks request status updates and returns the result once the request is completed.

import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/lora", {
  input: {
    model_name: "stabilityai/stable-diffusion-xl-base-1.0",
    prompt: "Photo of a european medieval 40 year old queen, silver hair, highly detailed face, detailed eyes, head shot, intricate crown, age spots, wrinkles"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
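
When the promise resolves, the result follows the output schema documented below. A minimal sketch of reading it (assuming at least one image was generated):

// Print the URL of the first generated image and the seed that was used
console.log(result.images[0].url);
console.log("Seed used:", result.seed);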

2. Authentication

The API uses an API Key for authentication. It is recommended that you set the FAL_KEY environment variable in your runtime whenever possible.

API Key

If your app runs in an environment where you cannot set environment variables, you can set the API Key manually as a client configuration.

import * as fal from "@fal-ai/serverless-client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Files

Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass your own URL or a Base64 data URI.

Data URI (base64)

You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that, while convenient, this approach can impact request performance for large files.
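
For example, here is a minimal sketch (Node.js; the local file name is a placeholder, and the choice of ic_light_image_url is just one of this endpoint's URL-accepting inputs) that encodes an image as a Base64 data URI:

import * as fal from "@fal-ai/serverless-client";
import { readFile } from "node:fs/promises";

// Read a local image and encode it as a Base64 data URI.
// "lighting-reference.png" is a hypothetical file name.
const bytes = await readFile("lighting-reference.png");
const dataUri = `data:image/png;base64,${bytes.toString("base64")}`;

const result = await fal.subscribe("fal-ai/lora", {
  input: {
    model_name: "stabilityai/stable-diffusion-xl-base-1.0",
    prompt: "Photo of a queen in a candle-lit hall",
    // Any of the *_url inputs also accepts a data URI.
    ic_light_image_url: dataUri,
  },
});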

Hosted files (URL)

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or flag the request as a bot.

Uploading files

We provide convenient file storage: upload a file with the client API, then use the returned URL in your request.

import * as fal from "@fal-ai/serverless-client";

// Upload a file (you can get a file reference from an input element or a drag-and-drop event)
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

// Use the URL in your request (note that inputs are nested under `input`;
// ic_light_image_url is one of this endpoint's URL-accepting inputs)
const result = await fal.subscribe("fal-ai/lora", {
  input: { ic_light_image_url: url },
});

Read more about file handling in our file upload guide.

4. Schema

Input

model_name* string

URL or HuggingFace ID of the base model to generate the image.

unet_name string

URL or HuggingFace ID of the custom U-Net model to use for the image generation.

variant string

The variant of the model to use for HuggingFace models, e.g. 'fp16'.

prompt* string

The prompt to use for generating the image. Be as descriptive as possible for best results.

negative_prompt string

The negative prompt to use. Use it to address details that you don't want in the image. This could be colors, objects, scenery, and even small details (e.g. moustache, blurry, low resolution). Default value: ""

prompt_weighting boolean

If set to true, the prompt weighting syntax will be used. Additionally, this will lift the 77 token limit by averaging embeddings.

loras list<LoraWeight>

The LoRAs to use for the image generation. You can use any number of LoRAs and they will be merged together to generate the final image. Default value: []

embeddings list<Embedding>

The embeddings to use for the image generation. Only a single embedding is supported at the moment. The embeddings will be used to map the tokens in the prompt to the embedding weights. Default value: []

controlnets list<ControlNet>

The control nets to use for the image generation. You can use any number of control nets and they will be applied to the image at the specified timesteps. Default value: []

controlnet_guess_mode boolean

If set to true, the controlnet will be applied to only the conditional predictions.

ip_adapter list<IPAdapter>

The IP adapter to use for the image generation. Default value: []

image_encoder_path string

The path to the image encoder model to use for the image generation.

image_encoder_subfolder string

The subfolder of the image encoder model to use for the image generation.

image_encoder_weight_name string

The weight name of the image encoder model to use for the image generation. Default value: "pytorch_model.bin"

ic_light_model_url string

The URL of the IC Light model to use for the image generation.

ic_light_model_background_image_url string

The URL of the IC Light model background image to use for the image generation. Make sure to use a background compatible with the model.

ic_light_image_url string

The URL of the IC Light model image to use for the image generation.

seed integer

The same seed and the same prompt given to the same version of Stable Diffusion will output the same image every time.

image_size ImageSize | Enum

The size of the generated image. You can choose between some presets or custom height and width that must be multiples of 8. Default value: square_hd

Possible values: "square_hd", "square", "portrait_4_3", "portrait_16_9", "landscape_4_3", "landscape_16_9"

num_inference_steps integer

The number of denoising steps to run. More steps give Stable Diffusion more iterations to refine the result, which can increase the amount of detail in your image at the cost of longer generation time. Default value: 30

guidance_scale float

The CFG (Classifier Free Guidance) scale controls how closely the model sticks to your prompt: higher values follow the prompt more literally, while lower values give the model more freedom. Default value: 7.5

clip_skip integer

The number of final layers of the CLIP text encoder to skip, leading to slightly different results. This also makes the image render slightly faster.

scheduler SchedulerEnum

Scheduler / sampler to use for the image denoising process.

Possible values: "DPM++ 2M", "DPM++ 2M Karras", "DPM++ 2M SDE", "DPM++ 2M SDE Karras", "Euler", "Euler A", "Euler (trailing timesteps)", "LCM", "LCM (trailing timesteps)", "DDIM"

timesteps TimestepsInput

Optionally override the timesteps to use for the denoising process. Only works with schedulers which support the timesteps argument in their set_timesteps method. Defaults to not overriding, in which case the scheduler automatically sets the timesteps based on the num_inference_steps parameter. If set to a custom timestep schedule, the num_inference_steps parameter will be ignored. Cannot be set if sigmas is set. Default value: {"method": "default", "array": []}

sigmas SigmasInput

Optionally override the sigmas to use for the denoising process. Only works with schedulers which support the sigmas argument in their set_sigmas method. Defaults to not overriding, in which case the scheduler automatically sets the sigmas based on the num_inference_steps parameter. If set to a custom sigma schedule, the num_inference_steps parameter will be ignored. Cannot be set if timesteps is set. Default value: {"method": "default", "array": []}

image_format ImageFormatEnum

The format of the generated image. Default value: "png"

Possible values: "jpeg", "png"

num_images integer

Number of images to generate in one request. Note that the higher the batch size, the longer it will take to generate the images. Default value: 1

enable_safety_checker boolean

If set to true, the safety checker will be enabled.

tile_width integer

The width of the tiles to be used for the image generation. Default value: 4096

tile_height integer

The height of the tiles to be used for the image generation. Default value: 4096

tile_stride_width integer

The horizontal stride of the tiles to be used for the image generation. Default value: 2048

tile_stride_height integer

The vertical stride of the tiles to be used for the image generation. Default value: 2048

debug_latents boolean

If set to true, the latents will be saved for debugging.

debug_per_pass_latents boolean

If set to true, the latents will be saved for debugging per pass.

{
  "model_name": "stabilityai/stable-diffusion-xl-base-1.0",
  "prompt": "Photo of a european medieval 40 year old queen, silver hair, highly detailed face, detailed eyes, head shot, intricate crown, age spots, wrinkles",
  "negative_prompt": "cartoon, painting, illustration, worst quality, low quality, normal quality",
  "prompt_weighting": true,
  "loras": [],
  "embeddings": [],
  "controlnets": [],
  "ip_adapter": [],
  "image_encoder_weight_name": "pytorch_model.bin",
  "image_size": "square_hd",
  "num_inference_steps": 30,
  "guidance_scale": 7.5,
  "timesteps": {
    "method": "default",
    "array": []
  },
  "sigmas": {
    "method": "default",
    "array": []
  },
  "image_format": "jpeg",
  "num_images": 1,
  "tile_width": 4096,
  "tile_height": 4096,
  "tile_stride_width": 2048,
  "tile_stride_height": 2048
}
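
Since LoRAs are this endpoint's namesake, here is a hedged sketch of a request that merges one LoRA into the base model and requests a custom image size (the LoRA URL and scale below are illustrative placeholders, not a real checkpoint):

import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/lora", {
  input: {
    model_name: "stabilityai/stable-diffusion-xl-base-1.0",
    prompt: "Watercolor painting of a lighthouse at dawn",
    // A LoraWeight pairs a weights URL (or HuggingFace ID) with a merge scale.
    // The URL here is a placeholder.
    loras: [
      { path: "https://example.com/watercolor-lora.safetensors", scale: 0.8 },
    ],
    // image_size also accepts custom width/height (multiples of 8).
    image_size: { width: 1280, height: 720 },
  },
});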

Output

images* list<Image>

The generated image files info.

seed* integer

Seed of the generated image. It will be the same as the seed passed in the input, or the randomly generated seed that was used if none was passed.

has_nsfw_concepts* list<boolean>

Whether the generated images contain NSFW concepts.

debug_latents File

The latents saved for debugging.

debug_per_pass_latents File

The latents saved for debugging per pass.

{
  "images": [
    {
      "url": "",
      "content_type": "image/png",
      "file_name": "z9RV14K95DvU.png",
      "file_size": 4404019,
      "width": 1024,
      "height": 1024
    }
  ],
  "debug_latents": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  },
  "debug_per_pass_latents": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  }
}
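
As a usage sketch, has_nsfw_concepts lines up index-by-index with images, so you can filter out flagged results (assuming enable_safety_checker was set to true in the request):

// Keep only images whose corresponding NSFW flag is false
const safeImages = result.images.filter(
  (_, i) => !result.has_nsfw_concepts[i]
);
safeImages.forEach((image) => console.log(image.url));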