# Z Image Trainer

> Train LoRAs on Z-Image Turbo, a super-fast 6B-parameter text-to-image model developed by Tongyi-MAI.


## Overview

- **Endpoint**: `https://fal.run/fal-ai/z-image-trainer`
- **Model ID**: `fal-ai/z-image-trainer`
- **Category**: training
- **Kind**: training
- **Tags**: turbo, z-image, fast, trainer



## Pricing

Your request will cost **$2.26** per **1000-step** training run. Pricing scales linearly with the step count, so a **2000-step** training run costs **$4.52**. The minimum is **100 steps**, which costs **$0.226**.

For more details, see [fal.ai pricing](https://fal.ai/pricing).
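The linear pricing rule above can be sketched as a small helper. This is an illustration only; the function name is ours, not part of any fal.ai API:

```python
def training_cost_usd(steps: int, price_per_1000_steps: float = 2.26) -> float:
    """Estimate the cost of a training run; pricing scales linearly with steps."""
    if steps < 100:
        raise ValueError("minimum step count is 100")
    return round(steps / 1000 * price_per_1000_steps, 4)
```

For example, `training_cost_usd(2000)` returns `4.52` and `training_cost_usd(100)` returns `0.226`.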

## API Information

This model can be used via our HTTP API or more conveniently via our client libraries.
See the input and output schema below, as well as the usage examples.


### Input Schema

The API accepts the following input parameters:


- **`image_data_url`** (`string`, _required_):
  URL to a zip archive with images of a consistent style. Use at least 10 images; more is better.
  
  The zip can also contain a caption text file for each image, named after the image's base name:
  ROOT.txt
  For example, photo.jpg pairs with photo.txt.
  
  This text file specifies the caption for the image.
  
  If no text file is provided for an image, the `default_caption` will be used.
  
  If no `default_caption` is provided either, the training will fail.

- **`steps`** (`integer`, _optional_):
  Total number of training steps. Default value: `1000`
  - Default: `1000`
  - Range: `100` to `10000`, step: `100`

- **`learning_rate`** (`float`, _optional_):
  Learning rate applied to trainable parameters. Default value: `0.0001`
  - Default: `0.0001`

- **`default_caption`** (`string`, _optional_):
  Default caption to use when caption files are missing. If None, missing captions will cause an error.

- **`training_type`** (`TrainingTypeEnum`, _optional_):
  Type of training to perform. Use 'content' to focus on the content of the images, 'style' to focus on the style of the images, and 'balanced' to focus on a combination of both. Default value: `"balanced"`
  - Default: `"balanced"`
  - Options: `"content"`, `"style"`, `"balanced"`


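The image/caption pairing convention described for `image_data_url` can be sketched as a small packaging helper. This is a hypothetical utility, not part of the fal.ai SDK; the accepted image extensions are our assumption:

```python
import os
import zipfile

def build_training_zip(image_dir: str, zip_path: str) -> list:
    """Zip images plus any matching ROOT.txt caption files for upload."""
    image_exts = {".jpg", ".jpeg", ".png", ".webp"}  # assumed accepted formats
    entries = []
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(image_dir)):
            root, ext = os.path.splitext(name)
            if ext.lower() not in image_exts:
                continue
            zf.write(os.path.join(image_dir, name), arcname=name)
            entries.append(name)
            # Include the sidecar caption file if one exists (photo.jpg -> photo.txt).
            caption = os.path.join(image_dir, root + ".txt")
            if os.path.exists(caption):
                zf.write(caption, arcname=root + ".txt")
                entries.append(root + ".txt")
    return entries
```

The resulting archive could then be uploaded (for example with the Python client's `fal_client.upload_file`) to obtain a URL for `image_data_url`.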

**Required Parameters Example**:

```json
{
  "image_data_url": ""
}
```

**Full Example**:

```json
{
  "image_data_url": "",
  "steps": 1000,
  "learning_rate": 0.0001,
  "training_type": "balanced"
}
```


### Output Schema

The API returns the following output format:

- **`diffusers_lora_file`** (`File`, _required_):
  URL to the trained diffusers LoRA weights.

- **`config_file`** (`File`, _required_):
  URL to the configuration file for the trained model.



**Example Response**:

```json
{
  "diffusers_lora_file": {
    "url": "",
    "content_type": "application/octet-stream",
    "file_name": "pytorch_lora_weights.safetensors",
    "file_size": 4404019
  },
  "config_file": {
    "url": "",
    "content_type": "application/json",
    "file_name": "config.json",
    "file_size": 1024
  }
}
```
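Each `File` object in the response carries a download `url` and a suggested `file_name`. A minimal sketch for saving one of them to disk (the helper name is ours, not part of the SDK):

```python
import os
import urllib.request

def save_result_file(file_info: dict, dest_dir: str) -> str:
    """Save one File object ({'url': ..., 'file_name': ...}) from the response."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, file_info["file_name"])
    with urllib.request.urlopen(file_info["url"]) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    return dest
```

For example, `save_result_file(result["diffusers_lora_file"], "outputs")` would download the trained LoRA weights into an `outputs/` directory.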


## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://fal.run/fal-ai/z-image-trainer \
  --header "Authorization: Key $FAL_KEY" \
  --header "Content-Type: application/json" \
  --data '{
     "image_data_url": ""
   }'
```

### Python

Ensure you have the Python client installed:

```bash
pip install fal-client
```

Then use the API client to make requests:

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/z-image-trainer",
    arguments={
        "image_data_url": ""
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

### JavaScript

Ensure you have the JavaScript client installed:

```bash
npm install --save @fal-ai/client
```

Then use the API client to make requests:

```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/z-image-trainer", {
  input: {
    image_data_url: ""
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
```


## Additional Resources

### Documentation

- [Model Playground](https://fal.ai/models/fal-ai/z-image-trainer)
- [API Documentation](https://fal.ai/models/fal-ai/z-image-trainer/api)
- [OpenAPI Schema](https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/z-image-trainer)

### fal.ai Platform

- [Platform Documentation](https://docs.fal.ai)
- [Python Client](https://docs.fal.ai/clients/python)
- [JavaScript Client](https://docs.fal.ai/clients/javascript)
