AnimateDiff

fal-ai/fast-animatediff/text-to-video
Inference
Commercial use

1. Calling the API#

Install the client#

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/serverless-client

Setup your API Key#

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"
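The client reads the key from this variable automatically. A quick sanity check at startup, as a minimal sketch (the error message is arbitrary):

// Fail fast if the key is missing from the environment.
if (!process.env.FAL_KEY) {
  throw new Error("FAL_KEY is not set; export it before starting the app.");
}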

Submit a request#

The client handles the full request protocol for you: it submits the request, tracks status updates, and returns the result once the request completes.

import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/fast-animatediff/text-to-video", {
  input: {
    prompt: "masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
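Once the request completes, fal.subscribe resolves with the model output. A minimal sketch of reading it, assuming the shape documented in the Output schema below (the type cast is only illustrative):

// Continuing from the `result` returned by fal.subscribe above.
const { video, seed } = result as { video: { url: string }; seed: number };
console.log("Generated video:", video.url);
console.log("Seed used:", seed);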

2. Authentication#

The API uses an API Key for authentication. We recommend setting the FAL_KEY environment variable in your runtime whenever possible.

API Key#

If your app runs in an environment where you cannot set environment variables, you can set the API Key manually as a client configuration option.

import * as fal from "@fal-ai/serverless-client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});
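If the key lives under a different environment variable, the same option applies; a sketch (the variable name MY_FAL_KEY is an assumption, not part of the API):

import * as fal from "@fal-ai/serverless-client";

// Read the key from a custom variable and pass it as the client credentials.
fal.config({
  credentials: process.env.MY_FAL_KEY ?? "",
});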

3. Files#

Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass your own URL or a Base64 data URI.

Data URI (base64)#

You can pass a Base64 data URI as a file input and the API will handle the decoding for you. Keep in mind that, while convenient, this approach can hurt request performance for large files.
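For example, a local file can be inlined as a data URI before being sent; a minimal Node sketch (the file name and MIME type are placeholders):

import { readFile } from "node:fs/promises";

// Read a local file and encode it as a Base64 data URI.
const bytes = await readFile("input.png");
const dataUri = `data:image/png;base64,${bytes.toString("base64")}`;

// Pass dataUri wherever the API accepts a file URL.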

Hosted files (URL)#

You can also pass your own URLs, as long as they are publicly accessible. Be aware that some hosts may block cross-site requests, apply rate limits, or treat the request as coming from a bot.

Uploading files#

We provide convenient file storage that lets you upload files and use them in your requests: upload a file with the client API and reference the returned URL in your request.

import * as fal from "@fal-ai/serverless-client";

// Upload a file (you can get a file reference from an input element or a drag-and-drop event)
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

// Use the URL in your request
const result = await fal.subscribe("fal-ai/fast-animatediff/text-to-video", {
  input: { image_url: url },
});

Read more about file handling in our file upload guide.

4. Schema#

Input#

prompt* string

The prompt to use for generating the video. Be as descriptive as possible for best results.

negative_prompt string

The negative prompt to use. Use it to address details that you don't want in the image. This could be colors, objects, scenery and even the small details (e.g. moustache, blurry, low resolution). Default value: "(bad quality, worst quality:1.2), ugly faces, bad anime"

num_frames integer

The number of frames to generate for the video. Default value: 16

num_inference_steps integer

The number of inference steps to perform. Default value: 25

guidance_scale float

The CFG (Classifier Free Guidance) scale controls how closely the model sticks to your prompt: higher values follow the prompt more strictly, while lower values allow more creative freedom. Default value: 7.5

seed integer

The same seed and prompt, given to the same model version, will produce the same video every time.

fps integer

The number of frames per second of the generated video. Default value: 8

motions list<Enum>

The motions to apply to the video.

video_size ImageSize | Enum

The size of the video to generate. Default value: square

Possible values: "square_hd", "square", "portrait_4_3", "portrait_16_9", "landscape_4_3", "landscape_16_9"

{
  "prompt": "masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes",
  "negative_prompt": "(bad quality, worst quality:1.2), ugly faces, bad anime",
  "num_frames": 16,
  "num_inference_steps": 25,
  "guidance_scale": 7.5,
  "fps": 8,
  "video_size": "square"
}
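These fields map directly onto the input object of the client call. A brief sketch passing a few of the optional parameters (the prompt is shortened and the values are arbitrary illustrations, not recommendations):

import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/fast-animatediff/text-to-video", {
  input: {
    prompt: "masterpiece, best quality, cherry blossoms, falling petals, outdoors", // shortened placeholder prompt
    num_frames: 16,
    num_inference_steps: 25,
    guidance_scale: 7.5,
    fps: 8,
    video_size: "landscape_16_9",
    seed: 42, // arbitrary seed for reproducible output
  },
});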

Output#

video* File

Generated video file.

seed* integer

Seed used for generating the video.

{
  "video": {
    "url": "https://fal-cdn.batuhan-941.workers.dev/files/kangaroo/DSrFBOk9XXIplm_kukI4n.mp4"
  }
}
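A minimal sketch of saving the returned video locally in Node 18+ (the URL stands in for result.video.url from a completed request, and the output filename is arbitrary):

import { writeFile } from "node:fs/promises";

// `videoUrl` stands in for `result.video.url` from a completed request.
const videoUrl = "https://fal-cdn.batuhan-941.workers.dev/files/kangaroo/DSrFBOk9XXIplm_kukI4n.mp4";

// Download the generated video and write it to disk.
const response = await fetch(videoUrl);
await writeFile("output.mp4", Buffer.from(await response.arrayBuffer()));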