Stable Diffusion XL Image to Image with LoRAs

Commercial use

1. Calling the API#

Install the client#

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/serverless-client

Set up your API Key#

Set FAL_KEY as an environment variable in your runtime.


Submit a request#

The client library handles the queue submit protocol for you: it subscribes to request status updates and returns the result once the request is completed.

import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/image-to-image", {
  input: {
    model_name: "stabilityai/stable-diffusion-xl-base-1.0",
    prompt: "an island near sea, with seagulls, moon shining over the sea, light house, boats in the background, fish flying over the sea",
    image_url: ""
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});

2. Authentication#

The API uses an API Key for authentication. It is recommended you set the FAL_KEY environment variable in your runtime when possible.

API Key#

In case your app is running in an environment where you cannot set environment variables, you can set the API Key manually as a client configuration.
import * as fal from "@fal-ai/serverless-client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Files#

Some attributes in the API accept file URLs as input. Whenever that's the case you can pass your own URL or a Base64 data URI.

Data URI (base64)#

You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that, while convenient, this alternative can impact request performance for large files.
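As an illustration, here's a minimal Node.js sketch of building a Base64 data URI from raw bytes. The payload and MIME type below are placeholders; for a real request you would encode your image bytes and use the matching image MIME type.

```javascript
// Build a Base64 data URI from raw bytes (Node.js).
// The payload here is placeholder text; in practice you would read image
// bytes, e.g. fs.readFileSync("input.png"), with MIME type "image/png".
const bytes = Buffer.from("Hello, World!");
const dataUri = `data:text/plain;base64,${bytes.toString("base64")}`;
console.log(dataUri); // data:text/plain;base64,SGVsbG8sIFdvcmxkIQ==
```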

Hosted files (URL)#

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or treat the request as coming from a bot.

Uploading files#

We provide a convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.

import * as fal from "@fal-ai/serverless-client";

// Upload a file (you can get a file reference from an input element or a drag-and-drop event)
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

// Use the URL in your request
const result = await fal.subscribe("fal-ai/image-to-image", {
  input: { image_url: url }
});

Read more about file handling in our file upload guide.

4. Schema#



model_name string

URL or HuggingFace ID of the base model to generate the image.


The architecture of the model to use. If an HF model is used, it will be detected automatically. Otherwise the architecture is inferred from the model name (whether or not it contains "XL").

Possible values: "sd", "sdxl"


prompt string

The prompt to use for generating the image. Be as descriptive as possible for best results.


image_url string

The URL of the image to use as a starting point for the generation.


negative_prompt string

The negative prompt to use. Use it to address details that you don't want in the image. This could be colors, objects, scenery and even small details (e.g. moustache, blurry, low resolution). Default value: ""


loras list

The LoRAs to use for the image generation. You can use any number of LoRAs and they will be merged together to generate the final image.
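For illustration only (both URLs below are hypothetical placeholders), a `loras` list combining two LoRAs with different scales might look like this:

```javascript
// Each LoRA entry pairs a `path` (URL or HuggingFace ID) with a `scale`
// controlling how strongly it influences the merged result.
// Both paths below are hypothetical placeholders.
const loras = [
  { path: "https://example.com/style-lora.safetensors", scale: 1.0 },
  { path: "https://example.com/detail-lora.safetensors", scale: 0.6 },
];
console.log(loras.length); // 2
```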


num_inference_steps integer

Increasing the number of steps tells Stable Diffusion to take more denoising steps when generating your final result, which can increase the amount of detail in your image. Default value: 30


guidance_scale float

The CFG (Classifier Free Guidance) scale is a measure of how closely you want the model to stick to your prompt when generating the image. Default value: 7.5
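Conceptually, classifier-free guidance blends the model's unconditional and prompt-conditioned predictions. This toy sketch (not the actual implementation, which operates on noise predictions inside the diffusion loop) shows how the scale amplifies the prompt's influence:

```javascript
// Toy illustration of the CFG formula:
//   guided = uncond + scale * (cond - uncond)
// Higher scales push the result further toward the
// prompt-conditioned prediction.
function applyCfg(uncond, cond, scale) {
  return uncond.map((u, i) => u + scale * (cond[i] - u));
}
console.log(applyCfg([0], [1], 7.5)); // [7.5]
```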


strength float

How much the input image is transformed: lower values preserve more of the original image, while higher values allow bigger changes. Default value: 0.8

image_size ImageSize | Enum

The size of the generated image. You can choose between some presets or custom height and width that must be multiples of 8. Default value: square_hd

Possible values: "square_hd", "square", "portrait_4_3", "portrait_16_9", "landscape_4_3", "landscape_16_9"
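When a preset doesn't fit, you can pass custom dimensions instead. This sketch (the helper name is our own) assumes only the multiples-of-8 constraint stated above:

```javascript
// Hypothetical helper: validate a custom image size against the
// multiples-of-8 constraint mentioned in the docs.
function isValidCustomSize({ width, height }) {
  return width % 8 === 0 && height % 8 === 0;
}
console.log(isValidCustomSize({ width: 1024, height: 768 })); // true
console.log(isValidCustomSize({ width: 900, height: 768 }));  // false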


Skips part of the image generation process, leading to slightly different results. This means the image renders faster, too.


Scheduler / sampler to use for the image denoising process.

Possible values: "DPM++ 2M", "DPM++ 2M Karras", "DPM++ 2M SDE", "DPM++ 2M SDE Karras", "Euler", "Euler A", "LCM"


num_images integer

Number of images to generate in one request. Note that the higher the batch size, the longer it will take to generate the images. Default value: 1


seed integer

The same seed and the same prompt given to the same version of Stable Diffusion will output the same image every time.


image_format Enum

The format of the generated image. Default value: "png"

Possible values: "jpeg", "png"


If set to true, the safety checker will be enabled.

{
  "model_name": "stabilityai/stable-diffusion-xl-base-1.0",
  "prompt": "an island near sea, with seagulls, moon shining over the sea, light house, boats in the background, fish flying over the sea",
  "image_url": "",
  "negative_prompt": "cartoon, painting, illustration, worst quality, low quality, normal quality",
  "loras": [
    {
      "path": "",
      "scale": 1
    }
  ],
  "num_inference_steps": 30,
  "guidance_scale": 7.5,
  "strength": 0.8,
  "image_size": "square_hd",
  "num_images": 1,
  "image_format": "jpeg"
}



images list

The generated image files info.


Seed of the generated image. It will be the same value passed in the input or, if none was passed, the randomly generated seed that was used.


Whether the generated images contain NSFW concepts.

{
  "images": [
    {
      "url": "",
      "content_type": "image/jpeg"
    }
  ]
}