Endpoint: POST https://fal.run/fal-ai/bytedance/dreamactor/v2
Endpoint ID: fal-ai/bytedance/dreamactor/v2


Quick Start

import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/bytedance/dreamactor/v2",
    arguments={
        "image_url": "https://v3b.fal.media/files/b/0a8d6292/E9WNRJh8K8DF9lSV0bkXs_image.png",
        "video_url": "https://v3b.fal.media/files/b/0a8d633f/u5Ye7jXL0Cfo0ijz5M6YY_input_example_dreamactor.mp4"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
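
The call above can be wrapped in a small helper that builds the arguments payload once. This is a minimal sketch: the `animate` helper, its parameter names, and the injectable `subscribe_fn` hook are illustrative conveniences, not part of the fal client API.

```python
# Illustrative helper (not part of fal_client): builds the arguments
# payload for the endpoint and delegates to a subscribe callable.
def animate(image_url, video_url, trim_first_second=True, subscribe_fn=None):
    # Injecting subscribe_fn makes the helper testable without network
    # access; by default it falls back to fal_client.subscribe.
    if subscribe_fn is None:
        import fal_client
        subscribe_fn = lambda app, arguments: fal_client.subscribe(
            app, arguments=arguments, with_logs=True
        )
    arguments = {
        "image_url": image_url,
        "video_url": video_url,
        "trim_first_second": trim_first_second,
    }
    return subscribe_fn("fal-ai/bytedance/dreamactor/v2", arguments)
```

Passing a fake callable as `subscribe_fn` lets you unit-test the payload construction without hitting the endpoint.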

Input Schema

image_url (string, required)
The URL of the reference image to animate. Supports real people, animation, pets, etc. Format: jpeg, jpg, or png. Max size: 4.7 MB. Resolution: between 480x480 and 1920x1080 (larger images are proportionally reduced).

video_url (string, required)
The URL of the driving template video providing the motion, facial expression, and lip movement reference. Max duration: 30 seconds. Format: mp4, mov, or webm. Resolution: between 200x200 and 2048x1440. Supports full face and body driving.

trim_first_second (boolean, default: true)
Whether to trim the first second of the output video. The output begins with a 1-second transition; enable this to remove it.
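
The numeric limits above can be checked client-side before uploading. The following is a hypothetical pre-flight sketch: the helper names and the rounding used for the proportional reduction are assumptions, not the service's documented algorithm.

```python
# Hypothetical client-side pre-flight checks mirroring the documented
# limits: image 480x480-1920x1080 (max 4.7 MB), larger images scaled down.
IMG_MIN_W, IMG_MIN_H = 480, 480
IMG_MAX_W, IMG_MAX_H = 1920, 1080
IMG_MAX_BYTES = int(4.7 * 1024 * 1024)

def fit_image(width, height):
    # Proportional reduction for oversized images; the exact rounding the
    # service applies is an assumption here.
    scale = min(IMG_MAX_W / width, IMG_MAX_H / height, 1.0)
    return round(width * scale), round(height * scale)

def image_ok(width, height, size_bytes):
    # Undersized or over-limit files are rejected outright; oversized
    # resolutions are merely scaled down, so only the lower bound and
    # the byte size act as hard gates.
    return width >= IMG_MIN_W and height >= IMG_MIN_H and size_bytes <= IMG_MAX_BYTES
```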

Output Schema

video (File, required)
Generated video file.

Input Example

{
  "image_url": "https://v3b.fal.media/files/b/0a8d6292/E9WNRJh8K8DF9lSV0bkXs_image.png",
  "video_url": "https://v3b.fal.media/files/b/0a8d633f/u5Ye7jXL0Cfo0ijz5M6YY_input_example_dreamactor.mp4",
  "trim_first_second": true
}

Output Example

{
  "video": {
    "url": "https://v3b.fal.media/files/b/0a8d6313/ONsZwYeJrFqi1W1jbnfYF_9HU7tPvX1hUlMMxXCepTz_video%20(1)%20(1).mp4"
  }
}
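
To save the generated file locally, the percent-encoded basename of the returned URL (note the %20 escapes in the example above) can be decoded with the standard library. This is a sketch; `filename_from_url` is an illustrative helper, and the actual download is shown only as a usage comment since it requires a completed request.

```python
from pathlib import PurePosixPath
from urllib.parse import unquote, urlparse

def filename_from_url(url):
    # Take the last path segment of the output URL and decode
    # percent-escapes such as %20 into a usable local filename.
    return unquote(PurePosixPath(urlparse(url).path).name)

# Usage (after a completed request):
#   import urllib.request
#   url = result["video"]["url"]
#   urllib.request.urlretrieve(url, filename_from_url(url))
```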