# Bria Product Shot

> Place any product in any scenery with just a prompt or reference image while maintaining high integrity of the product. Trained exclusively on licensed data for safe and risk-free commercial use and optimized for eCommerce.


## Overview

- **Endpoint**: `https://fal.run/fal-ai/bria/product-shot`
- **Model ID**: `fal-ai/bria/product-shot`
- **Category**: image-to-image
- **Kind**: inference
- **Tags**: product photography



## Pricing

- **Price**: $0.04 per generation

For more details, see [fal.ai pricing](https://fal.ai/pricing).

## API Information

This model can be used via our HTTP API or, more conveniently, via our client libraries.
See the input and output schemas below, as well as the usage examples.


### Input Schema

The API accepts the following input parameters:


- **`image_url`** (`string`, _required_):
  The URL of the product shot to be placed in a lifestyle shot. If both image_url and image_file are provided, image_url will be used. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB.
  - Examples: "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg"

- **`scene_description`** (`string`, _optional_):
  Text description of the new scene or background for the provided product shot. Bria currently supports prompts in English only, excluding special characters.
  - Examples: "on a rock, next to the ocean, dark theme"

- **`ref_image_url`** (`string`, _optional_):
  The URL of the reference image used to generate the new scene or background for the product shot. Use "" to leave empty. Either ref_image_url or scene_description must be provided, but not both. If both ref_image_url and ref_image_file are provided, ref_image_url will be used. Accepted formats are jpeg, jpg, png, webp.
  - Default: `""`
  - Examples: "https://storage.googleapis.com/falserverless/bria/bria_product_bg.jpg"

- **`optimize_description`** (`boolean`, _optional_):
  Whether to optimize the scene description.
  - Default: `true`

- **`num_results`** (`integer`, _optional_):
  The number of lifestyle product shots you would like to generate. When placement_type=automatic, you will get num_results x 10 results; when placement_type=manual_placement, you will get num_results x (number of selected placements) results.
  - Default: `1`
  - Range: `1` to `4`

- **`fast`** (`boolean`, _optional_):
  Whether to use the fast model.
  - Default: `true`

- **`placement_type`** (`PlacementTypeEnum`, _optional_):
  This parameter allows you to control the positioning of the product in the image. Choosing 'original' will preserve the original position of the product in the image. Choosing 'automatic' will generate results with the 10 recommended positions for the product. Choosing 'manual_placement' will allow you to select predefined positions (using the parameter 'manual_placement_selection'). Selecting 'manual_padding' will allow you to control the position and size of the image by defining the desired padding in pixels around the product.
  - Default: `"manual_placement"`
  - Options: `"original"`, `"automatic"`, `"manual_placement"`, `"manual_padding"`

- **`original_quality`** (`boolean`, _optional_):
  This flag is only relevant when placement_type=original. If true, the output image retains the original input image's size; otherwise, the image is scaled to 1 megapixel (1MP) while preserving its aspect ratio.
  - Default: `false`

- **`shot_size`** (`list<integer>`, _optional_):
  The desired size of the final product shot. For optimal results, the total number of pixels should be around 1,000,000. This parameter is only relevant when placement_type=automatic or placement_type=manual_placement.
  - Default: `[1000,1000]`
  - Array of integer

- **`manual_placement_selection`** (`ManualPlacementSelectionEnum`, _optional_):
  If you've selected placement_type=manual_placement, use this parameter to specify which placements/positions to use from the list. You can select more than one placement in a single request.
  - Default: `"bottom_center"`
  - Options: `"upper_left"`, `"upper_right"`, `"bottom_left"`, `"bottom_right"`, `"right_center"`, `"left_center"`, `"upper_center"`, `"bottom_center"`, `"center_vertical"`, `"center_horizontal"`

- **`padding_values`** (`list<integer>`, _optional_):
  The desired padding in pixels around the product, when using placement_type=manual_padding. The order of the values is [left, right, top, bottom]. For optimal results, the total number of pixels, including padding, should be around 1,000,000. It is recommended to first use the product cutout API to obtain the cutout and determine its size, then define the required padding and use the cutout as the input for this API.
  - Array of integer

- **`sync_mode`** (`boolean`, _optional_):
  If `true`, the media will be returned as a data URI, and the output data won't be available in the request history.
  - Default: `false`
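Several parameters above (`shot_size`, `padding_values`) reference a budget of roughly 1,000,000 total pixels. As a minimal sketch of how you might derive a `shot_size` that stays near that budget while preserving a product image's aspect ratio (the helper function and its name are illustrative, not part of the API):

```python
def shot_size_for_aspect(width: int, height: int, target_pixels: int = 1_000_000) -> list[int]:
    """Scale (width, height) so the total pixel count is close to target_pixels,
    keeping the original aspect ratio."""
    scale = (target_pixels / (width * height)) ** 0.5
    return [round(width * scale), round(height * scale)]

# e.g. a 3:2 source image
print(shot_size_for_aspect(1500, 1000))  # -> [1225, 816] (1225 * 816 = 999,600 pixels)
```

The resulting pair can be passed directly as the `shot_size` argument when `placement_type` is `automatic` or `manual_placement`.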



**Required Parameters Example**:

```json
{
  "image_url": "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg"
}
```

**Full Example**:

```json
{
  "image_url": "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg",
  "scene_description": "on a rock, next to the ocean, dark theme",
  "ref_image_url": "https://storage.googleapis.com/falserverless/bria/bria_product_bg.jpg",
  "optimize_description": true,
  "num_results": 1,
  "fast": true,
  "placement_type": "manual_placement",
  "shot_size": [
    1000,
    1000
  ],
  "manual_placement_selection": "bottom_center"
}
```
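For comparison, a hypothetical request using `placement_type="manual_padding"` might look like the following. The padding values here are purely illustrative; recall that the order is [left, right, top, bottom] and the padded total should stay near 1,000,000 pixels:

```json
{
  "image_url": "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg",
  "scene_description": "on a rock, next to the ocean, dark theme",
  "placement_type": "manual_padding",
  "padding_values": [200, 200, 100, 300]
}
```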


### Output Schema

The API returns the following output format:

- **`images`** (`list<Image>`, _required_):
  The generated images
  - Array of Image
  - Examples: [{"content_type":"image/png","url":"https://storage.googleapis.com/falserverless/bria/bria_product_res.png"}]



**Example Response**:

```json
{
  "images": [
    {
      "content_type": "image/png",
      "url": "https://storage.googleapis.com/falserverless/bria/bria_product_res.png"
    }
  ]
}
```
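Once a response arrives, the generated image URLs can be pulled out of the `images` list. A minimal sketch using the example response above (the variable names are mine; the structure follows the output schema):

```python
# Response shaped per the output schema above
result = {
    "images": [
        {
            "content_type": "image/png",
            "url": "https://storage.googleapis.com/falserverless/bria/bria_product_res.png",
        }
    ]
}

# Collect every generated image URL (there may be several when num_results > 1
# or placement_type=automatic)
urls = [image["url"] for image in result["images"]]
print(urls[0])
```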


## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://fal.run/fal-ai/bria/product-shot \
  --header "Authorization: Key $FAL_KEY" \
  --header "Content-Type: application/json" \
  --data '{
     "image_url": "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg"
   }'
```

### Python

Ensure you have the Python client installed:

```bash
pip install fal-client
```

Then use the API client to make requests:

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/bria/product-shot",
    arguments={
        "image_url": "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

### JavaScript

Ensure you have the JavaScript client installed:

```bash
npm install --save @fal-ai/client
```

Then use the API client to make requests:

```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/bria/product-shot", {
  input: {
    image_url: "https://storage.googleapis.com/falserverless/bria/bria_product_fg.jpg"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
```


## Additional Resources

### Documentation

- [Model Playground](https://fal.ai/models/fal-ai/bria/product-shot)
- [API Documentation](https://fal.ai/models/fal-ai/bria/product-shot/api)
- [OpenAPI Schema](https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/bria/product-shot)

### fal.ai Platform

- [Platform Documentation](https://docs.fal.ai)
- [Python Client](https://docs.fal.ai/clients/python)
- [JavaScript Client](https://docs.fal.ai/clients/javascript)
