Migrate to @fal-ai/client
The `@fal-ai/serverless-client` package has been deprecated in favor of `@fal-ai/client`. Please check the migration guide for more information.
fal-ai/live-portrait
Predict Pose
The client provides a convenient way to interact with the model API.

```shell
npm install --save @fal-ai/client
```
Set `FAL_KEY` as an environment variable in your runtime.

```shell
export FAL_KEY="YOUR_API_KEY"
```
The client API handles the submit protocol for you. It will track the request status updates and return the result once the request is completed.
```typescript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/live-portrait", {
  input: {
    video_url: "https://storage.googleapis.com/falserverless/model_tests/live-portrait/liveportrait-example.mp4",
    image_url: "https://storage.googleapis.com/falserverless/model_tests/live-portrait/XKEmk3mAzGHUjK3qqH-UL.jpeg"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});

console.log(result.data);
console.log(result.requestId);
```
The API uses an API Key for authentication. It is recommended you set the `FAL_KEY` environment variable in your runtime when possible.
```typescript
import { fal } from "@fal-ai/client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});
```
When running code on the client side (e.g. in a browser, mobile app, or GUI application), make sure not to expose your `FAL_KEY`. Instead, use a server-side proxy to make requests to the API. For more information, check out our server-side integration guide.
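As a sketch of what the client-side half of that setup can look like: the client can be pointed at your own proxy route instead of carrying credentials. The route path below is an example, not a required value; match it to wherever your proxy is mounted.

```typescript
import { fal } from "@fal-ai/client";

// Point the browser client at your own server route so that FAL_KEY
// never ships in client-side code. "/api/fal/proxy" is a placeholder
// path; use whatever route your server-side proxy is mounted on.
fal.config({
  proxyUrl: "/api/fal/proxy",
});
```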
The client API provides a convenient way to submit requests to the model.
```typescript
import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit("fal-ai/live-portrait", {
  input: {
    video_url: "https://storage.googleapis.com/falserverless/model_tests/live-portrait/liveportrait-example.mp4",
    image_url: "https://storage.googleapis.com/falserverless/model_tests/live-portrait/XKEmk3mAzGHUjK3qqH-UL.jpeg"
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});
```
You can fetch the status of a request to check if it is completed or still in progress.
```typescript
import { fal } from "@fal-ai/client";

const status = await fal.queue.status("fal-ai/live-portrait", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});
```
Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.
```typescript
import { fal } from "@fal-ai/client";

const result = await fal.queue.result("fal-ai/live-portrait", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});

console.log(result.data);
console.log(result.requestId);
```
Some attributes in the API accept file URLs as input. Whenever that is the case, you can pass either your own URL or a Base64 data URI.
You can pass a Base64 data URI as a file input, and the API will handle the decoding for you. Keep in mind that, while convenient, this alternative can impact request performance for large files.
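For example, a small payload can be encoded into a data URI before being passed as a file input. The bytes below are just the four-byte PNG signature, standing in for real image data:

```typescript
// Stand-in bytes; in practice these would be a complete encoded image.
const bytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]); // PNG signature
const base64 = Buffer.from(bytes).toString("base64");
const imageDataUri = `data:image/png;base64,${base64}`;

// The resulting string can be passed directly as `image_url`.
console.log(imageDataUri); // data:image/png;base64,iVBORw==
```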
You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, apply rate limits, or treat the request as coming from a bot.
We provide a convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.
```typescript
import { fal } from "@fal-ai/client";

const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);
```
The client will auto-upload the file for you if you pass a binary object (e.g. `File`, `Data`).
Read more about file handling in our file upload guide.
- `video_url` (`string`, required): URL of the video to drive the lip syncing.
- `image_url` (`string`, required): URL of the image to be animated.
- `blink` (`float`): Amount to blink the eyes.
- `eyebrow` (`float`): Amount to raise or lower the eyebrows.
- `wink` (`float`): Amount to wink.
- `pupil_x` (`float`): Amount to move the pupils horizontally.
- `pupil_y` (`float`): Amount to move the pupils vertically.
- `aaa` (`float`): Amount to open the mouth in an 'aaa' shape.
- `eee` (`float`): Amount to shape the mouth in an 'eee' position.
- `woo` (`float`): Amount to shape the mouth in a 'woo' position.
- `smile` (`float`): Amount to smile.
- `flag_lip_zero` (`boolean`): Whether to set the lips to a closed state before animation. Only takes effect when `flag_eye_retargeting` and `flag_lip_retargeting` are False. Default value: `true`
- `rotate_pitch` (`float`): Amount to rotate the face in pitch.
- `rotate_yaw` (`float`): Amount to rotate the face in yaw.
- `rotate_roll` (`float`): Amount to rotate the face in roll.
- `flag_eye_retargeting` (`boolean`): Whether to enable eye retargeting.
- `flag_lip_retargeting` (`boolean`): Whether to enable lip retargeting.
- `flag_stitching` (`boolean`): Whether to enable stitching. Recommended to set to True. Default value: `true`
- `flag_relative` (`boolean`): Whether to use relative motion. Default value: `true`
- `flag_pasteback` (`boolean`): Whether to paste the animated face crop back from the face-cropping space into the original image space. Default value: `true`
- `flag_do_crop` (`boolean`): Whether to crop the source portrait to the face-cropping space. Default value: `true`
- `flag_do_rot` (`boolean`): Whether to apply rotation when `flag_do_crop` is True. Default value: `true`
- `dsize` (`integer`): Size of the output image. Default value: `512`
- `scale` (`float`): Scaling factor for the face crop. Default value: `2.3`
- `vx_ratio` (`float`): Horizontal offset ratio for the face crop.
- `vy_ratio` (`float`): Vertical offset ratio for the face crop. Positive values move up, negative values move down. Default value: `-0.125`
- `batch_size` (`integer`): Batch size for the model. The larger the batch size, the faster the model runs, but the more memory it consumes. Default value: `32`
- `enable_safety_checker` (`boolean`): Whether to enable the safety checker. If enabled, the model checks whether the input image contains a face before processing it. The safety checker will process the input image
```json
{
  "video_url": "https://storage.googleapis.com/falserverless/model_tests/live-portrait/liveportrait-example.mp4",
  "image_url": "https://storage.googleapis.com/falserverless/model_tests/live-portrait/XKEmk3mAzGHUjK3qqH-UL.jpeg",
  "flag_lip_zero": true,
  "flag_stitching": true,
  "flag_relative": true,
  "flag_pasteback": true,
  "flag_do_crop": true,
  "flag_do_rot": true,
  "dsize": 512,
  "scale": 2.3,
  "vy_ratio": -0.125,
  "batch_size": 32
}
```
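The pose and expression controls from the input schema can be combined in a single payload. A sketch, with placeholder URLs and illustrative values (the schema documents the fields but not their valid ranges, so treat these numbers as examples only):

```typescript
// Illustrative pose/expression input; field names follow the input
// schema above, but the specific values and URLs are placeholders.
const input = {
  video_url: "https://example.com/driving.mp4", // placeholder URL
  image_url: "https://example.com/portrait.jpeg", // placeholder URL
  rotate_pitch: 5.0, // nod the head slightly forward
  rotate_yaw: -10.0, // turn the head slightly to one side
  smile: 0.3, // mild smile
  blink: 0.1, // slight eye closure
  flag_relative: true, // use relative motion (the default)
};
```

Such an object can then be passed as `input` to `fal.subscribe("fal-ai/live-portrait", { input })` or `fal.queue.submit(...)`.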
The generated video file.
```json
{
  "video": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  }
}
```
- `url` (`string`, required): The URL where the file can be downloaded from.
- `content_type` (`string`): The mime type of the file.
- `file_name` (`string`): The name of the file. It will be auto-generated if not provided.
- `file_size` (`integer`): The size of the file in bytes.
- `file_data` (`string`): File data.
- `url` (`string`, required): The URL where the file can be downloaded from.
- `content_type` (`string`): The mime type of the file.
- `file_name` (`string`): The name of the file. It will be auto-generated if not provided.
- `file_size` (`integer`): The size of the file in bytes.
- `file_data` (`string`): File data.
- `width` (`integer`): The width of the image in pixels.
- `height` (`integer`): The height of the image in pixels.