Wan VACE 14B Video to Video
About
Endpoint for generating video from depth maps.
1. Calling the API#
Install the client#
The client provides a convenient way to interact with the model API.
npm install --save @fal-ai/client
Migrate to @fal-ai/client
The @fal-ai/serverless-client package has been deprecated in favor of @fal-ai/client. Please check the migration guide for more information.
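In most cases only the import changes; here is a minimal before-and-after sketch (the namespace-style import is an assumption about typical usage of the deprecated package):
// Before (deprecated):
// import * as fal from "@fal-ai/serverless-client";
// After:
import { fal } from "@fal-ai/client";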
Setup your API Key#
Set FAL_KEY as an environment variable in your runtime.
export FAL_KEY="YOUR_API_KEY"
Submit a request#
The client handles the submit protocol for you: it tracks request status updates and returns the result once the request completes.
import { fal } from "@fal-ai/client";
const result = await fal.subscribe("fal-ai/wan-vace-14b/depth", {
input: {
prompt: "A confident woman strides toward the camera down a sun-drenched, empty street. Her vibrant summer dress, a flowing emerald green with delicate white floral embroidery, billows slightly in the gentle breeze. She carries a stylish, woven straw bag, its natural tan contrasting beautifully with the dress. The dress's fabric shimmers subtly, catching the light. The white embroidery is intricate, each tiny flower meticulously detailed. Her expression is focused, yet relaxed, radiating self-assuredness. Her auburn hair, partially pulled back in a loose braid, catches the sunlight, creating warm highlights. The street itself is paved with warm, grey cobblestones, reflecting the bright sun. The mood is optimistic and serene, emphasizing the woman's independence and carefree spirit. High resolution 4k",
video_url: "https://storage.googleapis.com/falserverless/example_inputs/wan-vace-depth-video.mp4"
},
logs: true,
onQueueUpdate: (update) => {
if (update.status === "IN_PROGRESS") {
update.logs.map((log) => log.message).forEach(console.log);
}
},
});
console.log(result.data);
console.log(result.requestId);
2. Authentication#
The API uses an API key for authentication. We recommend setting the FAL_KEY environment variable in your runtime whenever possible.
API Key#
import { fal } from "@fal-ai/client";
fal.config({
credentials: "YOUR_FAL_KEY"
});
Protect your API Key
When running code on the client side (e.g. in a browser, a mobile app, or a GUI application), make sure not to expose your FAL_KEY. Instead, use a server-side proxy to make requests to the API. For more information, check out our server-side integration guide.
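For example, a minimal proxy-based configuration might look like the sketch below; the /api/fal/proxy path is an assumed route that your own server exposes and that forwards requests with your FAL_KEY attached:
import { fal } from "@fal-ai/client";
fal.config({
  // All requests go through your server, which holds the FAL_KEY.
  proxyUrl: "/api/fal/proxy",
});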
3. Queue#
Submit a request#
The client API provides a convenient way to submit requests to the model.
import { fal } from "@fal-ai/client";
const { request_id } = await fal.queue.submit("fal-ai/wan-vace-14b/depth", {
input: {
prompt: "A confident woman strides toward the camera down a sun-drenched, empty street. Her vibrant summer dress, a flowing emerald green with delicate white floral embroidery, billows slightly in the gentle breeze. She carries a stylish, woven straw bag, its natural tan contrasting beautifully with the dress. The dress's fabric shimmers subtly, catching the light. The white embroidery is intricate, each tiny flower meticulously detailed. Her expression is focused, yet relaxed, radiating self-assuredness. Her auburn hair, partially pulled back in a loose braid, catches the sunlight, creating warm highlights. The street itself is paved with warm, grey cobblestones, reflecting the bright sun. The mood is optimistic and serene, emphasizing the woman's independence and carefree spirit. High resolution 4k",
video_url: "https://storage.googleapis.com/falserverless/example_inputs/wan-vace-depth-video.mp4"
},
webhookUrl: "https://optional.webhook.url/for/results",
});
Fetch request status#
You can fetch the status of a request to check if it is completed or still in progress.
import { fal } from "@fal-ai/client";
const status = await fal.queue.status("fal-ai/wan-vace-14b/depth", {
requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
logs: true,
});
Get the result#
Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.
import { fal } from "@fal-ai/client";
const result = await fal.queue.result("fal-ai/wan-vace-14b/depth", {
requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);
4. Files#
Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass your own URL or a Base64 data URI.
Data URI (base64)#
You can pass a Base64 data URI as a file input, and the API will handle the decoding for you. Keep in mind that, while convenient, this option can impact request performance for large files.
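As a sketch, in a Node.js runtime you could encode a local video as a data URI and pass it directly; the file path here is hypothetical:
import { fal } from "@fal-ai/client";
import { readFileSync } from "node:fs";

// Read the source video from disk and encode it as a Base64 data URI.
const base64 = readFileSync("./depth-input.mp4").toString("base64");

const result = await fal.subscribe("fal-ai/wan-vace-14b/depth", {
  input: {
    prompt: "A confident woman strides toward the camera down a sun-drenched street.",
    video_url: `data:video/mp4;base64,${base64}`, // decoded by the API server-side
  },
});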
Hosted files (URL)#
You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or flag the request as coming from a bot.
Uploading files#
We provide convenient file storage: you can upload files using the client API and use the returned URL in your requests.
import { fal } from "@fal-ai/client";
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);
Auto uploads
The client will auto-upload the file for you if you pass a binary object (e.g. File, Data).
Read more about file handling in our file upload guide.
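For instance, a browser-side sketch relying on auto-upload, assuming videoFile is a File obtained from a file picker in your UI:
import { fal } from "@fal-ai/client";

// `videoFile` is a File object, e.g. taken from an <input type="file"> element.
declare const videoFile: File;

const result = await fal.subscribe("fal-ai/wan-vace-14b/depth", {
  input: {
    prompt: "A confident woman strides toward the camera down a sun-drenched street.",
    // The client detects the binary object, uploads it, and substitutes the hosted URL.
    video_url: videoFile,
  },
});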
5. Schema#
Input#
prompt string* requiredThe text prompt to guide video generation.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
match_input_num_frames booleanIf true, the number of frames in the generated video will match the number of frames in the input video. If false, the number of frames will be determined by the num_frames parameter.
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
match_input_frames_per_second booleanIf true, the frames per second of the generated video will match the input video. If false, the frames per second will be determined by the frames_per_second parameter.
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Ignored if match_input_frames_per_second is true. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
video_url string* requiredURL to the source video file. Required for depth task.
URLs to source reference images. If provided, the model will use these images as references.
first_frame_url stringURL to the first frame of the video. If provided, the model will use this frame as a reference.
last_frame_url stringURL to the last frame of the video. If provided, the model will use this frame as a reference.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
preprocess booleanWhether to preprocess the input video.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences (see the sketch after this parameter list). Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
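To make the frame-length controls above concrete, here is a hedged sketch that mirrors the input clip's frame count and frame rate, lets the model auto-downsample long inputs, and interpolates back with FILM. All parameter names come from the schema above; the specific values are illustrative, and the full default payload is shown below.
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/wan-vace-14b/depth", {
  input: {
    prompt: "A confident woman strides toward the camera down a sun-drenched street.",
    video_url: "https://storage.googleapis.com/falserverless/example_inputs/wan-vace-depth-video.mp4",
    // Follow the source clip instead of the fixed num_frames / frames_per_second.
    match_input_num_frames: true,
    match_input_frames_per_second: true,
    // For long or high-fps sources: generate at no less than 15 fps,
    // then interpolate the result back to the original frame count.
    enable_auto_downsample: true,
    auto_downsample_min_fps: 15,
    interpolator_model: "film",
  },
});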
{
"prompt": "A confident woman strides toward the camera down a sun-drenched, empty street. Her vibrant summer dress, a flowing emerald green with delicate white floral embroidery, billows slightly in the gentle breeze. She carries a stylish, woven straw bag, its natural tan contrasting beautifully with the dress. The dress's fabric shimmers subtly, catching the light. The white embroidery is intricate, each tiny flower meticulously detailed. Her expression is focused, yet relaxed, radiating self-assuredness. Her auburn hair, partially pulled back in a loose braid, catches the sunlight, creating warm highlights. The street itself is paved with warm, grey cobblestones, reflecting the bright sun. The mood is optimistic and serene, emphasizing the woman's independence and carefree spirit. High resolution 4k",
"negative_prompt": "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards",
"match_input_num_frames": false,
"num_frames": 81,
"match_input_frames_per_second": false,
"frames_per_second": 16,
"resolution": "auto",
"aspect_ratio": "auto",
"num_inference_steps": 30,
"guidance_scale": 5,
"sampler": "unipc",
"shift": 5,
"video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan-vace-depth-video.mp4",
"enable_safety_checker": true,
"enable_prompt_expansion": false,
"preprocess": false,
"acceleration": "regular",
"video_quality": "high",
"video_write_mode": "balanced",
"num_interpolated_frames": 0,
"temporal_downsample_factor": 0,
"enable_auto_downsample": false,
"auto_downsample_min_fps": 15,
"interpolator_model": "film",
"sync_mode": false,
"transparency_mode": "content_aware"
}
Output#
video VideoFile* requiredThe generated depth video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
{
"video": {
"url": "https://storage.googleapis.com/falserverless/example_outputs/wan-vace-depth-output.mp4"
},
"prompt": ""
}
Other types#
WanVACERequest#
prompt string* requiredThe text prompt to guide video generation.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
match_input_num_frames booleanIf true, the number of frames in the generated video will match the number of frames in the input video. If false, the number of frames will be determined by the num_frames parameter.
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
match_input_frames_per_second booleanIf true, the frames per second of the generated video will match the input video. If false, the frames per second will be determined by the frames_per_second parameter.
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Ignored if match_input_frames_per_second is true. Default value: 16
task TaskEnumTask type for the model. Default value: "depth"
Possible enum values: depth, pose, inpainting, outpainting, reframe
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
video_url stringURL to the source video file. If provided, the model will use this video as a reference.
mask_video_url stringURL to the source mask file. If provided, the model will use this mask as a reference.
mask_image_url stringURL to the guiding mask file. If provided, the model will use this mask as a reference to create a masked video, and mask_video_url will be ignored.
URLs to source reference images. If provided, the model will use these images as references.
first_frame_url stringURL to the first frame of the video. If provided, the model will use this frame as a reference.
last_frame_url stringURL to the last frame of the video. If provided, the model will use this frame as a reference.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
preprocess booleanWhether to preprocess the input video.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
WanVACEReframeRequest#
prompt stringThe text prompt to guide video generation. Optional for reframing. Default value: ""
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
match_input_num_frames booleanIf true, the number of frames in the generated video will match the number of frames in the input video. If false, the number of frames will be determined by the num_frames parameter. Default value: true
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
match_input_frames_per_second booleanIf true, the frames per second of the generated video will match the input video. If false, the frames per second will be determined by the frames_per_second parameter. Default value: true
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Ignored if match_input_frames_per_second is true. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
video_url string* requiredURL to the source video file. This video will be used as a reference for the reframe task.
first_frame_url stringURL to the first frame of the video. If provided, the model will use this frame as a reference.
last_frame_url stringURL to the last frame of the video. If provided, the model will use this frame as a reference.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
zoom_factor floatZoom factor for the video. When this value is greater than 0, the video will be zoomed in by this factor (relative to the canvas size), cutting off the edges of the video. A value of 0 means no zoom.
trim_borders booleanWhether to trim borders from the video. Default value: true
WanVACEImageToVideoRequest#
prompt string* requiredThe prompt to guide the video generation.
first_frame_url string* requiredURL to the first frame of the video.
last_frame_url stringURL to the last frame of the video.
URLs to reference images.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
WanVACEPoseRequest#
prompt string* requiredThe text prompt to guide video generation. For pose task, the prompt should describe the desired pose and action of the subject in the video.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
match_input_num_frames booleanIf true, the number of frames in the generated video will match the number of frames in the input video. If false, the number of frames will be determined by the num_frames parameter.
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
match_input_frames_per_second booleanIf true, the frames per second of the generated video will match the input video. If false, the frames per second will be determined by the frames_per_second parameter.
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Ignored if match_input_frames_per_second is true. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
video_url string* requiredURL to the source video file. Required for pose task.
URLs to source reference images. If provided, the model will use these images as references.
first_frame_url stringURL to the first frame of the video. If provided, the model will use this frame as a reference.
last_frame_url stringURL to the last frame of the video. If provided, the model will use this frame as a reference.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
preprocess booleanWhether to preprocess the input video.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
WanVACEReframeResponse#
video VideoFile* requiredThe generated reframe video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACETextToVideoRequest#
prompt string* requiredThe prompt to guide the video generation.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "720p"
Possible enum values: 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "16:9"
Possible enum values: 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
URLs to reference images.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
VideoFile#
url string* requiredThe URL where the file can be downloaded from.
content_type stringThe mime type of the file.
file_name stringThe name of the file. It will be auto-generated if not provided.
file_size integerThe size of the file in bytes.
width integerThe width of the video.
height integerThe height of the video.
fps floatThe FPS of the video.
duration floatThe duration of the video.
num_frames integerThe number of frames in the video.
WanVACEOutpaintingResponse#
video VideoFile* requiredThe generated outpainting video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACEImageToVideoResponse#
video VideoFile* requiredThe generated image-to-video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACEResponse#
video VideoFile* requiredThe generated video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACEInpaintingResponse#
video VideoFile* requiredThe generated inpainting video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACEInpaintingRequest#
prompt string* requiredThe text prompt to guide video generation.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
match_input_num_frames booleanIf true, the number of frames in the generated video will match the number of frames in the input video. If false, the number of frames will be determined by the num_frames parameter.
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
match_input_frames_per_second booleanIf true, the frames per second of the generated video will match the input video. If false, the frames per second will be determined by the frames_per_second parameter.
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Ignored if match_input_frames_per_second is true. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
video_url string* requiredURL to the source video file. Required for inpainting.
mask_video_url string* requiredURL to the source mask file. Required for inpainting.
mask_image_url stringURL to the guiding mask file. If provided, the model will use this mask as a reference to create masked video using salient mask tracking. Will be ignored if mask_video_url is provided.
URLs to source reference images. If provided, the model will use these images as references.
first_frame_url stringURL to the first frame of the video. If provided, the model will use this frame as a reference.
last_frame_url stringURL to the last frame of the video. If provided, the model will use this frame as a reference.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
preprocess booleanWhether to preprocess the input video.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
WanVACETextToVideoResponse#
video VideoFile* requiredThe generated text-to-video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACEPoseResponse#
video VideoFile* requiredThe generated pose video file.
prompt string* requiredThe prompt used for generation.
seed integer* requiredThe seed used for generation.
WanVACEOutpaintingRequest#
prompt string* requiredThe text prompt to guide video generation.
negative_prompt stringNegative prompt for video generation. Default value: "letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards"
match_input_num_frames booleanIf true, the number of frames in the generated video will match the number of frames in the input video. If false, the number of frames will be determined by the num_frames parameter.
num_frames integerNumber of frames to generate. Must be between 81 and 241 (inclusive). Default value: 81
match_input_frames_per_second booleanIf true, the frames per second of the generated video will match the input video. If false, the frames per second will be determined by the frames_per_second parameter.
frames_per_second integerFrames per second of the generated video. Must be between 5 and 30. Ignored if match_input_frames_per_second is true. Default value: 16
seed integerRandom seed for reproducibility. If None, a random seed is chosen.
resolution ResolutionEnumResolution of the generated video. Default value: "auto"
Possible enum values: auto, 240p, 360p, 480p, 580p, 720p
aspect_ratio AspectRatioEnumAspect ratio of the generated video. Default value: "auto"
Possible enum values: auto, 16:9, 1:1, 9:16
num_inference_steps integerNumber of inference steps for sampling. Higher values give better quality but take longer. Default value: 30
guidance_scale floatGuidance scale for classifier-free guidance. Higher values encourage the model to generate videos that adhere more closely to the text prompt. Default value: 5
sampler SamplerEnumSampler to use for video generation. Default value: "unipc"
Possible enum values: unipc, dpm++, euler
shift floatShift parameter for video generation. Default value: 5
video_url string* requiredURL to the source video file. Required for outpainting.
URLs to source reference images. If provided, the model will use these images as references.
first_frame_url stringURL to the first frame of the video. If provided, the model will use this frame as a reference.
last_frame_url stringURL to the last frame of the video. If provided, the model will use this frame as a reference.
enable_safety_checker booleanIf set to true, the safety checker will be enabled.
enable_prompt_expansion booleanWhether to enable prompt expansion.
acceleration EnumAcceleration to use for inference. Options are 'none', 'low', or 'regular'. Accelerated inference will very slightly affect output, but will be significantly faster. Default value: regular
Possible enum values: none, low, regular
video_quality VideoQualityEnumThe quality of the generated video. Default value: "high"
Possible enum values: low, medium, high, maximum
video_write_mode VideoWriteModeEnumThe write mode of the generated video. Default value: "balanced"
Possible enum values: fast, balanced, small
num_interpolated_frames integerNumber of frames to interpolate between the original frames. A value of 0 means no interpolation.
temporal_downsample_factor integerTemporal downsample factor for the video. This is an integer value that determines how many frames to skip in the video. A value of 0 means no downsampling. For each downsample factor, one upsample factor will automatically be applied.
enable_auto_downsample booleanIf true, the model will automatically temporally downsample the video to an appropriate frame length for the model, then will interpolate it back to the original frame length.
auto_downsample_min_fps floatThe minimum frames per second to downsample the video to. This is used to help determine the auto downsample factor, aiming for the lowest detail-preserving downsample factor. The default value is appropriate for most videos; if your video has very fast motion, you may need to increase this value, and if it has very little motion, you can decrease it to allow higher downsampling and thus longer sequences. Default value: 15
interpolator_model InterpolatorModelEnumThe model to use for frame interpolation. Options are 'rife' or 'film'. Default value: "film"
Possible enum values: rife, film
sync_mode booleanIf true, the media will be returned as a data URI and the output data won't be available in the request history.
transparency_mode TransparencyModeEnumThe transparency mode to apply to the first and last frames. This controls how the transparent areas of the first and last frames are filled. Default value: "content_aware"
Possible enum values: content_aware, white, black
expand_left booleanWhether to expand the video to the left.
expand_right booleanWhether to expand the video to the right.
expand_top booleanWhether to expand the video to the top.
expand_bottom booleanWhether to expand the video to the bottom.
expand_ratio floatAmount of expansion. This is a float value between 0 and 1, where 0.25 adds 25% to the original video size on the specified sides. Default value: 0.25