# ControlNeXt SVD

> Animate a reference image with a driving video using ControlNeXt.


## Overview

- **Endpoint**: `https://fal.run/fal-ai/controlnext`
- **Model ID**: `fal-ai/controlnext`
- **Category**: video-to-video
- **Kind**: inference
- **Tags**: animation, stylized



## Pricing

- **Price**: $0 per compute second

For more details, see [fal.ai pricing](https://fal.ai/pricing).

## API Information

This model can be used via our HTTP API or, more conveniently, via our client libraries.
See the input and output schemas below, along with the usage examples.


### Input Schema

The API accepts the following input parameters:


- **`image_url`** (`string`, _required_):
  URL of the reference image.
  - Examples: "https://storage.googleapis.com/falserverless/model_tests/musepose/ref.png"

- **`video_url`** (`string`, _required_):
  URL of the input video.
  - Examples: "https://storage.googleapis.com/falserverless/model_tests/musepose/dance.mp4"

- **`height`** (`integer`, _optional_):
  Height of the output video.
  - Default: `1024`
  - Range: `64` to `1024`

- **`width`** (`integer`, _optional_):
  Width of the output video.
  - Default: `576`
  - Range: `64` to `1024`

- **`guidance_scale`** (`float`, _optional_):
  Guidance scale for the diffusion process.
  - Default: `3`
  - Range: `0.1` to `10`

- **`num_inference_steps`** (`integer`, _optional_):
  Number of inference steps.
  - Default: `25`
  - Range: `1` to `100`

- **`max_frame_num`** (`integer`, _optional_):
  Maximum number of frames to process.
  - Default: `240`
  - Range: `1` to `1000`

- **`batch_frames`** (`integer`, _optional_):
  Number of frames to process in each batch.
  - Default: `24`
  - Range: `1` to `50`

- **`overlap`** (`integer`, _optional_):
  Number of overlapping frames between batches.
  - Default: `6`
  - Range: `0` to `20`

- **`sample_stride`** (`integer`, _optional_):
  Stride for sampling frames from the input video.
  - Default: `2`
  - Range: `1` to `10`

- **`decode_chunk_size`** (`integer`, _optional_):
  Chunk size for decoding frames.
  - Default: `2`
  - Range: `1` to `10`

- **`motion_bucket_id`** (`float`, _optional_):
  Motion bucket ID for the pipeline.
  - Default: `127`
  - Range: `0` to `255`

- **`fps`** (`integer`, _optional_):
  Frames per second of the output video.
  - Default: `7`
  - Range: `1` to `60`

- **`controlnext_cond_scale`** (`float`, _optional_):
  Condition scale for ControlNeXt.
  - Default: `1`
  - Range: `0.1` to `10`
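Several of these parameters interact: the pipeline samples every `sample_stride`-th frame of the input (capped at `max_frame_num`), then processes the sampled frames in windows of `batch_frames` that share `overlap` frames with their neighbors. The arithmetic can be sketched as follows, assuming a simple sliding-window scheme; the exact server-side batching is not documented here, so treat this as an estimate only:

```python
import math

def plan_batches(total_frames: int,
                 sample_stride: int = 2,
                 max_frame_num: int = 240,
                 batch_frames: int = 24,
                 overlap: int = 6) -> tuple[int, int]:
    """Estimate how many frames get sampled from the input video and
    how many overlapping windows they form. Illustrative only."""
    # Every sample_stride-th frame is kept, up to the max_frame_num cap.
    sampled = min(total_frames // sample_stride, max_frame_num)
    # Each window after the first contributes (batch_frames - overlap) new frames.
    step = batch_frames - overlap
    if sampled <= batch_frames:
        return sampled, 1
    windows = 1 + math.ceil((sampled - batch_frames) / step)
    return sampled, windows

# A 600-frame input clip at the default settings:
sampled, windows = plan_batches(600)
print(sampled, windows)  # 240 13
```

Larger `overlap` values smooth transitions between windows at the cost of more compute, since fewer new frames are covered per window.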



**Required Parameters Example**:

```json
{
  "image_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/ref.png",
  "video_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/dance.mp4"
}
```

**Full Example**:

```json
{
  "image_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/ref.png",
  "video_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/dance.mp4",
  "height": 1024,
  "width": 576,
  "guidance_scale": 3,
  "num_inference_steps": 25,
  "max_frame_num": 240,
  "batch_frames": 24,
  "overlap": 6,
  "sample_stride": 2,
  "decode_chunk_size": 2,
  "motion_bucket_id": 127,
  "fps": 7,
  "controlnext_cond_scale": 1
}
```
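Out-of-range values are rejected by the API, so it can be worth validating numeric parameters client-side before submitting. A minimal sketch, with the range table transcribed from the schema above (`validate_params` is a hypothetical helper, not part of any fal client library):

```python
# (min, max) per parameter, transcribed from the input schema above.
RANGES = {
    "height": (64, 1024),
    "width": (64, 1024),
    "guidance_scale": (0.1, 10),
    "num_inference_steps": (1, 100),
    "max_frame_num": (1, 1000),
    "batch_frames": (1, 50),
    "overlap": (0, 20),
    "sample_stride": (1, 10),
    "decode_chunk_size": (1, 10),
    "motion_bucket_id": (0, 255),
    "fps": (1, 60),
    "controlnext_cond_scale": (0.1, 10),
}

def validate_params(params: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for key, value in params.items():
        if key in RANGES:
            lo, hi = RANGES[key]
            if not (lo <= value <= hi):
                problems.append(f"{key}={value} outside [{lo}, {hi}]")
    return problems

print(validate_params({"fps": 7, "overlap": 6}))  # []
print(validate_params({"fps": 120}))
```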


### Output Schema

The API returns the following output format:

- **`video`** (`File`, _required_):
  The generated video.



**Example Response**:

```json
{
  "video": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  }
}
```
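The response is plain JSON, so retrieving the result comes down to reading `video.url`. A small sketch of extracting the fields and deriving a local save path, assuming the response shape shown above (the URL below is a placeholder, not a real output):

```python
from pathlib import Path
from urllib.parse import urlparse

def video_target_path(response: dict, out_dir: str = ".") -> Path:
    """Pick a local path for the generated video, preferring the
    server-supplied file_name and falling back to the URL's basename."""
    video = response["video"]
    name = video.get("file_name") or Path(urlparse(video["url"]).path).name
    return Path(out_dir) / name

resp = {
    "video": {
        "url": "https://example.com/outputs/result.mp4",  # placeholder
        "content_type": "video/mp4",
        "file_name": "result.mp4",
        "file_size": 4404019,
    }
}
print(video_target_path(resp, "downloads"))  # downloads/result.mp4
```

From there, any HTTP client (for example `urllib.request.urlretrieve`) can fetch `video["url"]` to that path.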


## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://fal.run/fal-ai/controlnext \
  --header "Authorization: Key $FAL_KEY" \
  --header "Content-Type: application/json" \
  --data '{
     "image_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/ref.png",
     "video_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/dance.mp4"
   }'
```

### Python

Ensure you have the Python client installed:

```bash
pip install fal-client
```

Then use the API client to make requests:

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/controlnext",
    arguments={
        "image_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/ref.png",
        "video_url": "https://storage.googleapis.com/falserverless/model_tests/musepose/dance.mp4"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

### JavaScript

Ensure you have the JavaScript client installed:

```bash
npm install --save @fal-ai/client
```

Then use the API client to make requests:

```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/controlnext", {
  input: {
    image_url: "https://storage.googleapis.com/falserverless/model_tests/musepose/ref.png",
    video_url: "https://storage.googleapis.com/falserverless/model_tests/musepose/dance.mp4"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
```


## Additional Resources

### Documentation

- [Model Playground](https://fal.ai/models/fal-ai/controlnext)
- [API Documentation](https://fal.ai/models/fal-ai/controlnext/api)
- [OpenAPI Schema](https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/controlnext)
- [GitHub Repository](https://github.com/dvlab-research/ControlNeXt)

### fal.ai Platform

- [Platform Documentation](https://docs.fal.ai)
- [Python Client](https://docs.fal.ai/clients/python)
- [JavaScript Client](https://docs.fal.ai/clients/javascript)
