# Train Flux Krea LoRA

> Train styles, people and other subjects at blazing speeds using the FLUX.1 Krea [dev] base model.


## Overview

- **Endpoint**: `https://fal.run/fal-ai/flux-krea-trainer`
- **Model ID**: `fal-ai/flux-krea-trainer`
- **Category**: training
- **Kind**: training
- **Tags**: lora, personalization



## Pricing

Your request will cost **$2 per training run** (scales linearly with steps). For **$2** you can run this model approximately **1 time**.

For more details, see [fal.ai pricing](https://fal.ai/pricing).

## API Information

This model can be used via our HTTP API or more conveniently via our client libraries.
See the input and output schema below, as well as the usage examples.


### Input Schema

The API accepts the following input parameters:


- **`images_data_url`** (`string`, _required_):
  URL to a zip archive with images. Use at least 4 images; in general, the more the better.
  
  In addition to images, the archive can contain text files with captions. Each text file should have the same name as the image file it corresponds to.

- **`trigger_word`** (`string`, _optional_):
  Trigger word to be used in the captions. If None, a trigger word will not be used.
  If no captions are provided, the trigger word will be used instead of captions. If captions are provided, the trigger word will not be used.

- **`create_masks`** (`boolean`, _optional_):
  If True, segmentation masks will be used to weight the training loss. For people, a face mask is used if possible.
  - Default: `true`

- **`steps`** (`integer`, _optional_):
  Number of steps to train the LoRA on.
  - Range: `1` to `10000`
  - Examples: 1000

- **`is_style`** (`boolean`, _optional_):
  If True, the training will be for a style. This deactivates segmentation and captioning and uses the trigger word instead. Use the trigger word to specify the style.
  - Default: `false`

- **`is_input_format_already_preprocessed`** (`boolean`, _optional_):
  Specifies whether the input data is already in a processed format. When set to False (default), the system expects raw input where image files and their corresponding caption files share the same name (e.g., 'photo.jpg' and 'photo.txt'). Set to True if your data is already in a preprocessed format.
  - Default: `false`

- **`data_archive_format`** (`string`, _optional_):
  The format of the archive. If not specified, the format will be inferred from the URL.
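
The archive layout described above (images plus same-named `.txt` caption files) can be assembled with a short script. This is a minimal sketch, not part of the official client; the helper name and the image extensions it accepts are assumptions, and your dataset files would replace the placeholders.

```python
import zipfile
from pathlib import Path

def build_training_archive(dataset_dir: str, archive_path: str) -> list[str]:
    """Zip every image in dataset_dir together with its same-named
    .txt caption file (if one exists), per the archive convention above."""
    image_exts = {".jpg", ".jpeg", ".png", ".webp"}  # assumed accepted formats
    added = []
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for image in sorted(Path(dataset_dir).iterdir()):
            if image.suffix.lower() not in image_exts:
                continue
            zf.write(image, image.name)
            added.append(image.name)
            caption = image.with_suffix(".txt")  # e.g. photo.jpg -> photo.txt
            if caption.exists():
                zf.write(caption, caption.name)
                added.append(caption.name)
    return added
```

The resulting zip can be hosted anywhere reachable by URL (the Python client's `fal_client.upload_file` is one option) and that URL passed as `images_data_url`.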



**Required Parameters Example**:

```json
{
  "images_data_url": ""
}
```

**Full Example**:

```json
{
  "images_data_url": "",
  "create_masks": true,
  "steps": 1000
}
```
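
The constraints in the schema above (required `images_data_url`, `steps` between 1 and 10000, `is_style` deactivating segmentation) can be checked client-side before submitting. A hypothetical helper, sketched under the assumption that the server performs its own authoritative validation:

```python
def validate_training_args(args: dict) -> dict:
    """Check a request payload against the schema constraints above.
    Hypothetical client-side helper; the API validates independently."""
    if not args.get("images_data_url"):
        raise ValueError("images_data_url is required")
    steps = args.get("steps", 1000)  # 1000 mirrors the example value above
    if not (1 <= steps <= 10000):
        raise ValueError("steps must be between 1 and 10000")
    if args.get("is_style") and args.get("create_masks", True):
        # is_style deactivates segmentation, so masks would have no effect
        # (assumed interpretation of the schema note above)
        args = {**args, "create_masks": False}
    return args
```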


### Output Schema

The API returns the following output format:

- **`diffusers_lora_file`** (`File`, _required_):
  URL to the trained diffusers lora weights.

- **`config_file`** (`File`, _required_):
  URL to the training configuration file.

- **`debug_preprocessed_output`** (`File`, _optional_):
  URL to the preprocessed images.



**Example Response**:

```json
{
  "diffusers_lora_file": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  },
  "config_file": {
    "url": "",
    "content_type": "image/png",
    "file_name": "z9RV14K95DvU.png",
    "file_size": 4404019
  }
}
```
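
Each output field is a `File` object carrying a download `url` rather than raw bytes. A minimal sketch for pulling the downloadable artifacts out of a response dict (field names as in the schema above; the helper itself is illustrative):

```python
def collect_artifacts(result: dict) -> dict[str, str]:
    """Map each File-typed output field to its download URL,
    skipping optional fields that are absent from the response."""
    fields = ("diffusers_lora_file", "config_file", "debug_preprocessed_output")
    return {
        name: result[name]["url"]
        for name in fields
        if name in result and result[name].get("url")
    }
```

Each URL can then be fetched with any HTTP client (e.g. `urllib.request.urlretrieve`) to save the trained weights locally.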


## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://fal.run/fal-ai/flux-krea-trainer \
  --header "Authorization: Key $FAL_KEY" \
  --header "Content-Type: application/json" \
  --data '{
     "images_data_url": ""
   }'
```

### Python

Ensure you have the Python client installed:

```bash
pip install fal-client
```

Then use the API client to make requests:

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/flux-krea-trainer",
    arguments={
        "images_data_url": ""
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

### JavaScript

Ensure you have the JavaScript client installed:

```bash
npm install --save @fal-ai/client
```

Then use the API client to make requests:

```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/flux-krea-trainer", {
  input: {
    images_data_url: ""
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
```


## Additional Resources

### Documentation

- [Model Playground](https://fal.ai/models/fal-ai/flux-krea-trainer)
- [API Documentation](https://fal.ai/models/fal-ai/flux-krea-trainer/api)
- [OpenAPI Schema](https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/flux-krea-trainer)

### fal.ai Platform

- [Platform Documentation](https://docs.fal.ai)
- [Python Client](https://docs.fal.ai/clients/python)
- [JavaScript Client](https://docs.fal.ai/clients/javascript)
