Fooocus

fal-ai/fooocus
Inference
Commercial use

1. Calling the API#

Install the client#

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/serverless-client

Setup your API Key#

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"

Submit a request#

The client library handles the full request lifecycle: it submits the request, tracks queue status updates, and returns the result once the request completes.

import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/fooocus", {
  input: {
    prompt:
      "an astronaut in the jungle, cold color palette with butterflies in the background, highly detailed, 8k",
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
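When the promise resolves, `result` follows the Output schema described below: a list of generated images plus timings and NSFW flags. As a small sketch (the helper name is ours), pulling out the image URLs might look like this:

```javascript
// Extract the URLs of the generated images from a Fooocus result.
// The `images` field is part of the Output schema documented below.
function imageUrls(result) {
  return (result.images ?? []).map((image) => image.url);
}
```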

2. Authentication#

The API uses an API Key for authentication. It is recommended you set the FAL_KEY environment variable in your runtime when possible.

API Key#

In case your app is running in an environment where you cannot set environment variables, you can set the API Key manually as a client configuration.

import * as fal from "@fal-ai/serverless-client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Files#

Some attributes in the API accept file URLs as input. Whenever that's the case you can pass your own URL or a Base64 data URI.

Data URI (base64)#

You can pass a Base64 data URI as a file input; the API handles the file decoding for you. Keep in mind that for large files this approach, although convenient, can hurt request performance.
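For example, in Node.js you could build a data URI from raw bytes with the built-in Buffer API (a sketch; the helper name is ours):

```javascript
// Encode raw bytes as a Base64 data URI suitable for file inputs.
// For large files, prefer a hosted URL or the storage upload helper
// shown below, since large inline payloads slow down the request.
function toDataUri(bytes, mimeType = "image/png") {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
}
```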

Hosted files (URL)#

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit, or consider the request as a bot.

Uploading files#

We provide a convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.

import * as fal from "@fal-ai/serverless-client";

// Upload a file (you can get a file reference from an input element or a drag-and-drop event)
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

// Use the URL in your request
const result = await fal.subscribe("fal-ai/fooocus", {
  input: { control_image_url: url },
});

Read more about file handling in our file upload guide.

4. Schema#

Input#

prompt (string)

The prompt to use for generating the image. Be as descriptive as possible for best results. Default value: ""

negative_prompt (string)

The negative prompt to use. Use it to address details that you don't want in the image. This could be colors, objects, scenery, and even small details (e.g. moustache, blurry, low resolution). Default value: ""

styles (list<Enum>)

The styles to use. Default value: ["Fooocus Enhance", "Fooocus Sharp", "Fooocus V2"]

performance (PerformanceEnum)

The performance preset, trading generation speed for quality. Default value: "Extreme Speed"

Possible values: "Speed", "Quality", "Extreme Speed", "Lightning"

guidance_scale (float)

The CFG (Classifier Free Guidance) scale is a measure of how closely you want the model to stick to your prompt when generating the image. Default value: 4

sharpness (float)

The sharpness of the generated image. Use it to control how sharp the generated image should be; higher values produce sharper images and textures. Default value: 2

aspect_ratio (string)

The size of the generated image. You can choose between presets or a custom width and height, both of which must be multiples of 8. Default value: "1024x1024"

num_images (integer)

Number of images to generate in one request. Default value: 1

loras (list<LoraWeight>)

The LoRAs to use for the image generation. You can use up to 5 LoRAs, and they will be merged together to generate the final image. Default value: [{"path": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors", "scale": 0.1}]

refiner_model (RefinerModelEnum)

Refiner (SDXL or SD 1.5). Default value: "None"

Possible values: "None", "realisticVisionV60B1_v51VAE.safetensors"

refiner_switch (float)

Use 0.4 for SD1.5 realistic models; 0.667 for SD1.5 anime models; 0.8 for XL refiners; or any value for switching between two SDXL models. Default value: 0.8

output_format (OutputFormatEnum)

The format of the generated image. Default value: "jpeg"

Possible values: "png", "jpeg", "webp"

sync_mode (boolean)

If set to true, the function will wait for the image to be generated and uploaded before returning the response. This increases the latency of the function, but it lets you get the image directly in the response without going through the CDN.

seed (integer)

The same seed and the same prompt given to the same version of Stable Diffusion will output the same image every time.

control_image_url (string)

The image to use as a reference for the generated image.

control_type (ControlTypeEnum)

The type of image control. Default value: "PyraCanny"

Possible values: "ImagePrompt", "PyraCanny", "CPDS", "FaceSwap"

control_image_weight (float)

The strength of the control image. Use it to control how much the generated image should look like the control image. Default value: 1

control_image_stop_at (float)

The stop-at value of the control image. Use it to control how much the generated image should look like the control image. Default value: 1

inpaint_image_url (string)

The image to use as a reference for inpainting.

mask_image_url (string)

The image to use as a mask for the generated image.

mixing_image_prompt_and_inpaint (boolean)

enable_safety_checker (boolean)

If set to false, the safety checker will be disabled. Default value: true

{
  "prompt": "an astronaut in the jungle, cold color palette with butterflies in the background, highly detailed, 8k",
  "negative_prompt": "(worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (blur, blurry, grainy), morbid, ugly, asymmetrical, mutated malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, out of focus, glitch, duplicate, (airbrushed, cartoon, anime, semi-realistic, cgi, render, blender, digital art, manga, amateur:1.3), (3D ,3D Game, 3D Game Scene, 3D Character:1.1), (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3)",
  "styles": [
    "Fooocus Enhance",
    "Fooocus Sharp",
    "Fooocus V2"
  ],
  "performance": "Extreme Speed",
  "guidance_scale": 4,
  "sharpness": 2,
  "aspect_ratio": "1024x1024",
  "num_images": 1,
  "loras": [
    {
      "path": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors",
      "scale": 0.1
    }
  ],
  "refiner_model": "None",
  "refiner_switch": 0.8,
  "output_format": "jpeg",
  "seed": 176400,
  "control_type": "ImagePrompt",
  "control_image_weight": 1,
  "control_image_stop_at": 1,
  "enable_safety_checker": true
}

Output#

images (list<Image>, required)

The generated image file info.

timings (Timings, required)

The time taken for the generation process.

has_nsfw_concepts (list<boolean>, required)

Whether the generated images contain NSFW concepts.

{
  "images": [
    {
      "url": "",
      "content_type": "image/png",
      "file_name": "z9RV14K95DvU.png",
      "file_size": 4404019,
      "width": 1024,
      "height": 1024
    }
  ]
}
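The has_nsfw_concepts list is index-aligned with images, so a caller can drop flagged results before using them. A minimal sketch (the helper name is ours; the field names come from the Output schema above):

```javascript
// Keep only the images whose corresponding has_nsfw_concepts flag
// is false. The two lists in the Output schema are index-aligned.
function safeImages(output) {
  const flags = output.has_nsfw_concepts ?? [];
  return output.images.filter((_, i) => !flags[i]);
}
```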