- More Detailed Caption
- Detailed Caption
- Caption
- OCR
- Region To Description
- Region To Category
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/more-detailed-caption
Endpoint ID: fal-ai/florence-2-large/more-detailed-caption
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/more-detailed-caption",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
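The three caption endpoints listed at the top of this page differ only in the level of detail they return. As a small illustrative sketch (the helper below is not part of fal_client; only the endpoint IDs come from this page), the detail level can be mapped to the matching endpoint ID:

```python
# Hypothetical helper: map a caption detail level to the matching
# Florence-2 Large endpoint ID. The endpoint IDs are taken from this
# page; the helper itself is illustrative, not part of fal_client.
CAPTION_ENDPOINTS = {
    "caption": "fal-ai/florence-2-large/caption",
    "detailed": "fal-ai/florence-2-large/detailed-caption",
    "more-detailed": "fal-ai/florence-2-large/more-detailed-caption",
}

def caption_endpoint(level: str) -> str:
    try:
        return CAPTION_ENDPOINTS[level]
    except KeyError:
        raise ValueError(f"unknown caption level: {level!r}")

print(caption_endpoint("more-detailed"))
# fal-ai/florence-2-large/more-detailed-caption
```

The returned ID can be passed directly as the first argument to fal_client.subscribe.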
Input Schema
- image_url (string): The URL of the image to be processed.

Output Schema
- results (string): Results from the model.

Input Example

```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```

Output Example

```json
{
  "results": ""
}
```
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/detailed-caption
Endpoint ID: fal-ai/florence-2-large/detailed-caption
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/detailed-caption",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- image_url (string): The URL of the image to be processed.

Output Schema
- results (string): Results from the model.

Input Example

```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```

Output Example

```json
{
  "results": ""
}
```
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/caption
Endpoint ID: fal-ai/florence-2-large/caption
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/caption",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- image_url (string): The URL of the image to be processed.

Output Schema
- results (string): Results from the model.

Input Example

```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```

Output Example

```json
{
  "results": ""
}
```
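For all of the caption endpoints, the caption text lives under the results key of the returned dict. A minimal defensive sketch, assuming only the {"results": ...} shape shown in the Output Example above:

```python
# Minimal sketch: pull the caption text out of a subscribe() result.
# The {"results": ...} shape is taken from the Output Example above;
# anything unexpected is handled defensively rather than assumed.
def extract_caption(result: dict) -> str:
    value = result.get("results", "")
    if isinstance(value, list):
        # Assumption for robustness: join list-shaped results into one string.
        return " ".join(str(v) for v in value)
    return str(value)

print(extract_caption({"results": "A green car parked in front of a building."}))
# A green car parked in front of a building.
```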
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/ocr
Endpoint ID: fal-ai/florence-2-large/ocr
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/ocr",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- image_url (string): The URL of the image to be processed.

Output Schema
- results (string): Results from the model.

Input Example

```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```

Output Example

```json
{
  "results": ""
}
```
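The examples on this page all pass a publicly hosted URL as image_url. For a local file (a common case for OCR), one option is to encode it as a base64 data URI; this sketch assumes the endpoint accepts data URIs for image inputs, and fal_client.upload_file is the alternative for large files:

```python
import base64
import mimetypes

# Sketch: turn a local image into a data URI so it can be passed as
# `image_url` without hosting the file first. Assumption: the endpoint
# accepts data URIs for image inputs; for large files, uploading via
# fal_client.upload_file and passing the returned URL is the alternative.
def to_data_uri(path: str) -> str:
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```

The returned string would then be used as the image_url argument in the Quick Start above.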
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/region-to-description
Endpoint ID: fal-ai/florence-2-large/region-to-description
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/region-to-description",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
        "region": {
            "x1": 100,
            "x2": 200,
            "y1": 100,
            "y2": 200
        }
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- image_url (string): The URL of the image to be processed.
- region (object): The user input coordinates (x1, y1, x2, y2).

Output Schema
- results (string): Results from the model.

Input Example

```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
  "region": {
    "x1": 100,
    "x2": 200,
    "y1": 100,
    "y2": 200
  }
}
```

Output Example

```json
{
  "results": ""
}
```
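The region object in the Input Example is a pixel bounding box with x1/y1 as one corner and x2/y2 as the other. As an illustrative client-side check (not part of the API; it assumes x1 &lt; x2 and y1 &lt; y2, matching the example values), a region can be validated before sending:

```python
# Illustrative client-side check, not part of the API: validate a
# region dict before sending it. Assumes pixel coordinates with
# x1 < x2 and y1 < y2, consistent with the Input Example above.
def validate_region(region: dict) -> dict:
    for key in ("x1", "y1", "x2", "y2"):
        if key not in region:
            raise ValueError(f"region is missing {key!r}")
    if region["x1"] >= region["x2"] or region["y1"] >= region["y2"]:
        raise ValueError("expected x1 < x2 and y1 < y2")
    return region

validate_region({"x1": 100, "y1": 100, "x2": 200, "y2": 200})  # passes
```

Catching malformed regions locally avoids a round trip to the endpoint for inputs that cannot describe a valid box.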
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/region-to-category
Endpoint ID: fal-ai/florence-2-large/region-to-category
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/region-to-category",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
        "region": {
            "x1": 100,
            "x2": 200,
            "y1": 100,
            "y2": 200
        }
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- image_url (string): The URL of the image to be processed.
- region (object): The user input coordinates (x1, y1, x2, y2).

Output Schema
- results (string): Results from the model.

Input Example

```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
  "region": {
    "x1": 100,
    "x2": 200,
    "y1": 100,
    "y2": 200
  }
}
```

Output Example

```json
{
  "results": ""
}
```
Related
- Florence-2 Large — Vision
- Florence-2 Large — Image Generation