- Object Detection
- Open Vocabulary Detection
- Ocr With Region
- Caption To Phrase Grounding
- Referring Expression Segmentation
- Dense Region Caption
- Region To Segmentation
- Region Proposal
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/object-detection
Endpoint ID: `fal-ai/florence-2-large/object-detection`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/object-detection",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```
Output Example
```json
{
  "results": {
    "bboxes": [
      {
        "label": ""
      }
    ]
  }
}
```
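The detections come back nested under `results.bboxes`. A minimal post-processing sketch, run against a hard-coded sample shaped like the output example above (real responses may carry extra per-box fields such as coordinates alongside `label`):

```python
# Collect the detected labels from an object-detection response.
# `sample` mimics the output example above; it is illustrative data,
# not an actual API result.
sample = {"results": {"bboxes": [{"label": "car"}, {"label": "wheel"}, {"label": "car"}]}}

def detected_labels(response):
    """Return the list of labels found in the response, in order."""
    return [box["label"] for box in response.get("results", {}).get("bboxes", [])]

print(detected_labels(sample))  # ['car', 'wheel', 'car']
```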
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/open-vocabulary-detection
Endpoint ID: `fal-ai/florence-2-large/open-vocabulary-detection`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/open-vocabulary-detection",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
        "text_input": ""
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.
- `text_input` (string): Text input for the task.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
  "text_input": ""
}
```
Output Example
```json
{
  "results": {
    "bboxes": [
      {
        "label": ""
      }
    ]
  }
}
```
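Open-vocabulary detection takes the classes to look for via `text_input`. A small helper sketch for building the argument payload; note that joining class names with commas is an assumption about the prompt format, so verify the exact convention in the Playground before relying on it:

```python
# Build the argument payload for open-vocabulary detection.
# Comma-joining class names into text_input is an assumed prompt format.
def make_arguments(image_url, classes):
    if not classes:
        raise ValueError("open-vocabulary detection needs at least one class name")
    return {"image_url": image_url, "text_input": ", ".join(classes)}

args = make_arguments("https://example.com/photo.jpg", ["red car", "traffic light"])
print(args["text_input"])  # red car, traffic light
```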
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/ocr-with-region
Endpoint ID: `fal-ai/florence-2-large/ocr-with-region`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/ocr-with-region",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```
Output Example
```json
{
  "results": {
    "quad_boxes": [
      {
        "label": ""
      }
    ]
  }
}
```
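Each entry in `results.quad_boxes` carries the text recognized in one region as its `label`. A minimal sketch that joins the per-region text into a single string, using illustrative sample data shaped like the output example above (real responses also include the quadrilateral coordinates):

```python
# Concatenate the text recognized in each region into one string.
# `sample` mimics the output example above; it is not a real API result.
sample = {"results": {"quad_boxes": [{"label": "STOP"}, {"label": "AHEAD"}]}}

def full_text(response, sep=" "):
    """Join the per-region labels (the recognized text) with `sep`."""
    return sep.join(box["label"] for box in response.get("results", {}).get("quad_boxes", []))

print(full_text(sample))  # STOP AHEAD
```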
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/caption-to-phrase-grounding
Endpoint ID: `fal-ai/florence-2-large/caption-to-phrase-grounding`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/caption-to-phrase-grounding",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
        "text_input": ""
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.
- `text_input` (string): Text input for the task.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
  "text_input": ""
}
```
Output Example
```json
{
  "results": {
    "bboxes": [
      {
        "label": ""
      }
    ]
  }
}
```
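A phrase from the caption can ground to more than one box. A sketch that tallies how many boxes each phrase received, run against illustrative sample data shaped like the output example above (real responses attach coordinates to each box):

```python
# Count how many boxes were grounded for each phrase.
# `sample` mimics the output example above; it is not a real API result.
from collections import Counter

sample = {"results": {"bboxes": [{"label": "a car"}, {"label": "a wheel"}, {"label": "a wheel"}]}}

def boxes_per_phrase(response):
    """Map each grounded phrase to the number of boxes it received."""
    return Counter(box["label"] for box in response.get("results", {}).get("bboxes", []))

print(boxes_per_phrase(sample))  # Counter({'a wheel': 2, 'a car': 1})
```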
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/referring-expression-segmentation
Endpoint ID: `fal-ai/florence-2-large/referring-expression-segmentation`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/referring-expression-segmentation",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
        "text_input": ""
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.
- `text_input` (string): Text input for the task.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
  "text_input": ""
}
```
Output Example
```json
{
  "results": {
    "polygons": [
      {
        "label": ""
      }
    ]
  }
}
```
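Segmentation results come back under `results.polygons`. One common post-processing step is measuring a mask's size; a sketch using the shoelace formula is below. The flat `[x1, y1, x2, y2, ...]` point layout is an assumption, so inspect a real response to confirm how polygon coordinates are encoded before using this:

```python
# Area of a polygon via the shoelace formula, assuming a flat
# [x1, y1, x2, y2, ...] list of vertices in pixel coordinates.
def polygon_area(points):
    xs, ys = points[0::2], points[1::2]
    n = len(xs)
    twice_area = sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i] for i in range(n))
    return abs(twice_area) / 2

square = [0, 0, 10, 0, 10, 10, 0, 10]  # a 10x10 square
print(polygon_area(square))  # 100.0
```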
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/dense-region-caption
Endpoint ID: `fal-ai/florence-2-large/dense-region-caption`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/dense-region-caption",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```
Output Example
```json
{
  "results": {
    "bboxes": [
      {
        "label": ""
      }
    ]
  }
}
```
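Remote calls can fail transiently, so production code often wraps `fal_client.subscribe` in a retry loop. A generic sketch, demonstrated with a stub callable rather than a live request; the retry logic here is not fal-specific API:

```python
# Retry a callable up to `attempts` times; any exception triggers a retry.
# `call` stands in for a zero-argument wrapper around fal_client.subscribe.
def with_retries(call, attempts=3):
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as error:  # illustrative catch-all; narrow in real code
            last_error = error
    raise last_error

# Stub that fails once, then succeeds, to show the wrapper's behavior.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient")
    return {"results": {"bboxes": []}}

print(with_retries(flaky))  # {'results': {'bboxes': []}}
```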
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/region-to-segmentation
Endpoint ID: `fal-ai/florence-2-large/region-to-segmentation`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/region-to-segmentation",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
        "region": {
            "x1": 100,
            "x2": 200,
            "y1": 100,
            "y2": 200
        }
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.
- `region` (object): The user input coordinates, with `x1`, `y1`, `x2`, `y2`.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg",
  "region": {
    "x1": 100,
    "x2": 200,
    "y1": 100,
    "y2": 200
  }
}
```
Output Example
```json
{
  "results": {
    "polygons": [
      {
        "label": ""
      }
    ]
  }
}
```
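A malformed `region` (negative coordinates, or a box with zero or negative extent) is worth catching client-side before making the request. A small validation sketch; the helper name and checks are illustrative, not part of the fal API:

```python
# Validate a region payload before sending it: coordinates must be
# non-negative and the box must have positive width and height.
def validate_region(region):
    x1, y1, x2, y2 = region["x1"], region["y1"], region["x2"], region["y2"]
    if min(x1, y1, x2, y2) < 0:
        raise ValueError("coordinates must be non-negative")
    if x2 <= x1 or y2 <= y1:
        raise ValueError("region must satisfy x1 < x2 and y1 < y2")
    return region

print(validate_region({"x1": 100, "y1": 100, "x2": 200, "y2": 200}))
```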
Endpoint:
POST https://fal.run/fal-ai/florence-2-large/region-proposal
Endpoint ID: `fal-ai/florence-2-large/region-proposal`
Quick Start
```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/florence-2-large/region-proposal",
    arguments={
        "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```
Input Schema
- `image_url` (string): The URL of the image to be processed.

Output Schema
- `results`: Results from the model.
- Processed image.

Input Example
```json
{
  "image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
}
```
Output Example
```json
{
  "results": {
    "bboxes": [
      {
        "label": ""
      }
    ]
  }
}
```
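Region proposals often overlap, so a typical next step is filtering them by intersection-over-union. A sketch of the IoU computation, assuming each box is an `(x1, y1, x2, y2)` tuple in pixels; the actual bbox fields in the response are an assumption to verify against a real result:

```python
# Intersection-over-union between two boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = 0.14285714285714285
```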
Related
- Florence-2 Large — Vision
- Florence-2 Large — Image Generation