# Wan 2.2 14B Image Trainer

> Wan 2.2 text to image LoRA trainer. Fine-tune Wan 2.2 for subjects and styles with unprecedented detail.


## Overview

- **Endpoint**: `https://fal.run/fal-ai/wan-22-image-trainer`
- **Model ID**: `fal-ai/wan-22-image-trainer`
- **Category**: training
- **Kind**: training
- **Description**: Wan 2.2 text to image LoRA trainer. Fine-tune Wan 2.2 for subjects and styles with unprecedented detail.

**Tags**: lora, personalization



## Pricing

Your request will cost **$0.0045 per step** (a minimum of 100 steps is charged). For **$4.50** you can fine-tune a LoRA for **1000 steps**.

For more details, see [fal.ai pricing](https://fal.ai/pricing).
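The billing rule above can be sketched as a small helper (the $0.0045/step rate and the 100-step minimum come from this section; the function name is illustrative):

```python
RATE_PER_STEP = 0.0045   # USD per training step, from the pricing section
MIN_BILLED_STEPS = 100   # runs shorter than this are billed as 100 steps

def training_cost(steps: int) -> float:
    """Estimated cost in USD for a run of `steps` training steps."""
    billed = max(steps, MIN_BILLED_STEPS)
    return round(billed * RATE_PER_STEP, 4)

print(training_cost(1000))  # -> 4.5, the $4.50 figure quoted above
print(training_cost(50))    # -> 0.45, short runs are billed as 100 steps
```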

## API Information

This model can be used via our HTTP API or more conveniently via our client libraries.
See the input and output schema below, as well as the usage examples.


### Input Schema

The API accepts the following input parameters:


- **`training_data_url`** (`string`, _required_):
  URL to the training data.

- **`trigger_phrase`** (`string`, _required_):
  Trigger phrase for the model.

- **`include_synthetic_captions`** (`boolean`, _optional_):
  Whether to include synthetic captions.
  - Default: `false`

- **`use_face_detection`** (`boolean`, _optional_):
  Whether to use face detection on the training data. When enabled, images are centered on the detected face when resizing.
  - Default: `true`
  - Examples: true

- **`use_face_cropping`** (`boolean`, _optional_):
  Whether to use face cropping for the training data. When enabled, images will be cropped to the face before resizing.
  - Default: `false`
  - Examples: false

- **`use_masks`** (`boolean`, _optional_):
  Whether to use masks for the training data.
  - Default: `true`
  - Examples: true

- **`steps`** (`integer`, _optional_):
  Number of training steps.
  - Default: `1000`
  - Range: `10` to `6000`
  - Examples: 1000

- **`learning_rate`** (`float`, _optional_):
  Learning rate for training.
  - Default: `0.0007`
  - Range: `0.000001` to `0.1`, step: `0.000001`
  - Examples: 0.0007

- **`is_style`** (`boolean`, _optional_):
  Whether the training data is style data. If `true`, face-specific options such as masking and face detection are disabled.
  - Default: `false`
  - Examples: false



**Required Parameters Example**:

```json
{
  "training_data_url": "",
  "trigger_phrase": ""
}
```

**Full Example**:

```json
{
  "training_data_url": "",
  "trigger_phrase": "",
  "use_face_detection": true,
  "use_face_cropping": false,
  "use_masks": true,
  "steps": 1000,
  "learning_rate": 0.0007,
  "is_style": false
}
```
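If you build the payload programmatically, the schema's ranges can be checked client-side before submitting. A minimal sketch (the `build_arguments` helper is illustrative, not part of the fal client; the ranges and defaults come from the schema above):

```python
def build_arguments(training_data_url: str, trigger_phrase: str, *,
                    steps: int = 1000, learning_rate: float = 0.0007,
                    is_style: bool = False) -> dict:
    """Assemble a request payload, checking the documented ranges."""
    if not training_data_url or not trigger_phrase:
        raise ValueError("training_data_url and trigger_phrase are required")
    if not 10 <= steps <= 6000:
        raise ValueError("steps must be between 10 and 6000")
    if not 0.000001 <= learning_rate <= 0.1:
        raise ValueError("learning_rate must be between 0.000001 and 0.1")
    args = {
        "training_data_url": training_data_url,
        "trigger_phrase": trigger_phrase,
        "steps": steps,
        "learning_rate": learning_rate,
        "is_style": is_style,
    }
    if is_style:
        # Per the schema, face-specific options are disabled for style data,
        # so turn them off explicitly for clarity.
        args["use_face_detection"] = False
        args["use_masks"] = False
    return args
```

The returned dict can be passed as the `arguments` parameter of `fal_client.subscribe` shown in the usage examples below.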


### Output Schema

The API returns the following output format:

- **`diffusers_lora_file`** (`File`, _required_):
  Low noise LoRA file.

- **`high_noise_lora`** (`File`, _required_):
  High noise LoRA file.

- **`config_file`** (`File`, _required_):
  Config file used by inference endpoints after training.



**Example Response**:

```json
{
  "diffusers_lora_file": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  },
  "high_noise_lora": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  },
  "config_file": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  }
}
```
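Once training finishes, each of the three `File` objects carries a download URL. A small helper that collects them from the parsed response (the field names come from the output schema above; the function itself is illustrative):

```python
def lora_urls(result: dict) -> dict:
    """Map each output artifact name to its download URL.

    `result` is the parsed JSON response; the three keys below are the
    fields documented in the output schema.
    """
    return {key: result[key]["url"]
            for key in ("diffusers_lora_file", "high_noise_lora", "config_file")}
```

The dict returned by `fal_client.subscribe` (or the parsed JSON body from the HTTP API) can be passed in directly.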


## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://fal.run/fal-ai/wan-22-image-trainer \
  --header "Authorization: Key $FAL_KEY" \
  --header "Content-Type: application/json" \
  --data '{
     "training_data_url": "",
     "trigger_phrase": ""
   }'
```

### Python

Ensure you have the Python client installed:

```bash
pip install fal-client
```

Then use the API client to make requests:

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/wan-22-image-trainer",
    arguments={
        "training_data_url": "",
        "trigger_phrase": ""
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

### JavaScript

Ensure you have the JavaScript client installed:

```bash
npm install --save @fal-ai/client
```

Then use the API client to make requests:

```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/wan-22-image-trainer", {
  input: {
    training_data_url: "",
    trigger_phrase: ""
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
```


## Additional Resources

### Documentation

- [Model Playground](https://fal.ai/models/fal-ai/wan-22-image-trainer)
- [API Documentation](https://fal.ai/models/fal-ai/wan-22-image-trainer/api)
- [OpenAPI Schema](https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/wan-22-image-trainer)

### fal.ai Platform

- [Platform Documentation](https://docs.fal.ai)
- [Python Client](https://docs.fal.ai/clients/python)
- [JavaScript Client](https://docs.fal.ai/clients/javascript)
