# LLaVA v1.6 34B

> Vision


## Overview

- **Endpoint**: `https://fal.run/fal-ai/llava-next`
- **Model ID**: `fal-ai/llava-next`
- **Category**: vision
- **Kind**: inference
- **Tags**: multimodal, vision



## Pricing

- **Price**: $0 per compute second

For more details, see [fal.ai pricing](https://fal.ai/pricing).

## API Information

This model can be used via our HTTP API or, more conveniently, via our client libraries.
See the input and output schemas below, along with the usage examples.


### Input Schema

The API accepts the following input parameters:


- **`image_url`** (`string`, _required_):
  URL of the image to be processed
  - Examples: "https://llava-vl.github.io/static/images/monalisa.jpg"

- **`prompt`** (`string`, _required_):
  Prompt to be used for the image
  - Examples: "Do you know who drew this painting?"

- **`max_tokens`** (`integer`, _optional_):
  Maximum number of tokens to generate
  - Default: `64`

- **`temperature`** (`float`, _optional_):
  Temperature for sampling
  - Default: `0.2`

- **`top_p`** (`float`, _optional_):
  Top P for sampling
  - Default: `1`
  - Range: `0` to `1`



**Required Parameters Example**:

```json
{
  "image_url": "https://llava-vl.github.io/static/images/monalisa.jpg",
  "prompt": "Do you know who drew this painting?"
}
```

**Full Example**:

```json
{
  "image_url": "https://llava-vl.github.io/static/images/monalisa.jpg",
  "prompt": "Do you know who drew this painting?",
  "max_tokens": 64,
  "temperature": 0.2,
  "top_p": 1
}
```
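Before sending a request, it can help to assemble and sanity-check the payload locally against the schema above. The sketch below uses a hypothetical helper, `build_payload`, which is not part of any fal.ai client; it simply enforces the required fields, the documented defaults, and the `top_p` range:

```python
def build_payload(image_url, prompt, max_tokens=64, temperature=0.2, top_p=1.0):
    """Assemble a request body for fal-ai/llava-next.

    Hypothetical helper based on the input schema above; not part of
    the official fal.ai clients. Defaults mirror the documented ones.
    """
    if not image_url or not prompt:
        raise ValueError("image_url and prompt are required")
    if not 0 <= top_p <= 1:
        raise ValueError("top_p must be in the range 0 to 1")
    return {
        "image_url": image_url,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
    }
```

The resulting dictionary can be passed as the `arguments`/`input` value in the client examples further below, or serialized as the JSON body of a raw HTTP request.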


### Output Schema

The API returns the following output format:

- **`output`** (`string`, _required_):
  Generated output
  - Examples: "Leonardo da Vinci"

- **`partial`** (`boolean`, _optional_):
  Whether the output is partial
  - Default: `false`



**Example Response**:

```json
{
  "output": "Leonardo da Vinci"
}
```


## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://fal.run/fal-ai/llava-next \
  --header "Authorization: Key $FAL_KEY" \
  --header "Content-Type: application/json" \
  --data '{
     "image_url": "https://llava-vl.github.io/static/images/monalisa.jpg",
     "prompt": "Do you know who drew this painting?"
   }'
```

### Python

Ensure you have the Python client installed:

```bash
pip install fal-client
```

Then use the API client to make requests:

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/llava-next",
    arguments={
        "image_url": "https://llava-vl.github.io/static/images/monalisa.jpg",
        "prompt": "Do you know who drew this painting?"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

### JavaScript

Ensure you have the JavaScript client installed:

```bash
npm install --save @fal-ai/client
```

Then use the API client to make requests:

```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/llava-next", {
  input: {
    image_url: "https://llava-vl.github.io/static/images/monalisa.jpg",
    prompt: "Do you know who drew this painting?"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
```


## Additional Resources

### Documentation

- [Model Playground](https://fal.ai/models/fal-ai/llava-next)
- [API Documentation](https://fal.ai/models/fal-ai/llava-next/api)
- [OpenAPI Schema](https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/llava-next)

### fal.ai Platform

- [Platform Documentation](https://docs.fal.ai)
- [Python Client](https://docs.fal.ai/clients/python)
- [JavaScript Client](https://docs.fal.ai/clients/javascript)
